
Conflicting On-Premise Pollers

Hi,
we have two private networks (AWS VPCs), each with an SSH server host, and both hosts are configured with the same IP, 10.0.0.1. There are other servers in each private network running dedicated services.

When I run the On-Premise Poller on one host, it correctly reports in Site24x7 as a poller with host name ip-10-0-0-1 and IP 10.0.0.1, and all monitors work fine. If I stop this one and start the poller on the other node, the system shows the same poller as connected, but the monitors go to Down status after a while. It seems the system cannot distinguish that these are two different networks and pollers. (The Down monitors are expected, because the second poller cannot reach the services in the other private network.) If I run both pollers at the same time, the monitors flap up and down randomly.

I tried to debug this. I can see that the MONAGENTID and AGENTUID entries in conf/serveragent.config are the same on both hosts. I obviously enter the same API key (CUSTOMERID) on both.

My question: is there any way to make the system distinguish two pollers on the same hostname? Without changing the hostname/IP address... and without running the pollers on different hosts?
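In case it helps with debugging, here is roughly how I compared the two configs (a quick sketch; the two paths are placeholders for wherever each poller's serveragent.config ends up):

```python
# Quick sketch: compare the identity keys of the two On-Premise Poller configs.
# The two paths below are placeholders; point them at each host's serveragent.config.
from pathlib import Path

KEYS = ("MONAGENTID", "AGENTUID")

def read_ids(config_path):
    """Return the identity-related key/value pairs from a serveragent.config file."""
    ids = {}
    for line in Path(config_path).read_text().splitlines():
        key, sep, value = line.partition("=")
        if sep and key.strip() in KEYS:
            ids[key.strip()] = value.strip()
    return ids

first = read_ids("poller1/conf/serveragent.config")
second = read_ids("poller2/conf/serveragent.config")

for key in KEYS:
    match = "SAME (conflict)" if first.get(key) == second.get(key) else "different (ok)"
    print(f"{key}: {first.get(key)} vs {second.get(key)} -> {match}")
```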
Replies (5)

Hi, 

I would request you to follow the steps given below on the second server and check if the issue persists. 

1. Go to the "~/Site24x7OnPremisePoller/Site24x7OnPremisePoller/conf" folder.
2. Edit the serveragent.config file.
3. Set both the "MONAGENTID" and "AGENTUID" parameters to SITE24X7NEW.
4. Restart the On-Premise Poller.

Once this is done, the agent will be registered as a second poller. 
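If you prefer to script the change, something along these lines should work (just a sketch; the config path is the default install location from step 1, adjust if yours differs):

```python
# Sketch: reset the poller identity keys so a fresh agent ID is generated on restart.
# CONFIG assumes the default install path mentioned in step 1 above.
from pathlib import Path

CONFIG = Path.home() / "Site24x7OnPremisePoller/Site24x7OnPremisePoller/conf/serveragent.config"

updated = []
for line in CONFIG.read_text().splitlines():
    key = line.split("=", 1)[0].strip()
    if key in ("MONAGENTID", "AGENTUID"):
        updated.append(f"{key}=SITE24X7NEW")  # so the agent registers as a new poller
    else:
        updated.append(line)

CONFIG.write_text("\n".join(updated) + "\n")
print(f"Updated {CONFIG}; now restart the On-Premise Poller.")
```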

Regards, 
Rafee

Hi Rafee,

thanks a lot for the hint. This changed something, but unfortunately it did not solve the issue. The pollers still conflict with each other and make my monitors go up and down randomly.

I now see two (not just one) new location profiles named ip-10-0-0-10, in addition to the one for the first poller. In the On-Premise Poller section, I still see only the one entry for my first poller, not two, which seems to be why the system cannot keep them apart.

Note that I did not change the config of the first poller. Is there anything else I can do to stop them from conflicting?

Thanks,
Michael

I found a solution that works for me: when I installed the poller on a different host, I noticed that "MONAGENTID" and "AGENTUID" got different values there. I cannot run the poller on that host for firewall reasons, but I could copy those ID values into the config of the second poller. The system now recognizes both pollers.

It seems the poller installer somehow generates these IDs from the hostname/IP, which leads to identical values when two systems have the same hostname. I tried manually fudging the ID value, but that did not work and the poller exited. Using the ID value generated by the other install works just fine.
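For anyone who wants to script the copy step instead of doing it by hand like I did, here is a rough sketch (both paths are placeholders: the source is the config from the throwaway install on the other host, the target is the second poller's config):

```python
# Sketch: carry the identity values generated by an install on a differently named
# host over to the second poller's config. Both paths are placeholders.
from pathlib import Path

SOURCE = Path("throwaway-install/conf/serveragent.config")  # install on a host with a unique hostname
TARGET = Path("poller2/conf/serveragent.config")            # the conflicting second poller
KEYS = ("MONAGENTID", "AGENTUID")

def parse(path):
    pairs = {}
    for line in path.read_text().splitlines():
        key, sep, value = line.partition("=")
        if sep:
            pairs[key.strip()] = value.strip()
    return pairs

generated = parse(SOURCE)

out = []
for line in TARGET.read_text().splitlines():
    key = line.split("=", 1)[0].strip()
    if key in KEYS and key in generated:
        out.append(f"{key}={generated[key]}")  # reuse the freshly generated, non-colliding IDs
    else:
        out.append(line)

TARGET.write_text("\n".join(out) + "\n")
print("Copied IDs; restart the second poller.")
```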

Now I am left with the two location profiles from the experiment above, which I cannot delete (it says "Cannot delete system generated location profile"), but that is a minor issue I can live with. So far things seem to work fine.

M

Did you clone the 2nd poller from the 1st? I had a similar issue caused by cloning our 2nd poller. My suggestion is to build both pollers from scratch.

One question: why would you want to have two pollers with the same IP address?

Hi, I just downloaded and installed both pollers; I did not build them or clone them. The installer seemed to auto-configure the AGENTUID to the same value on both. Only when I installed on a different host/IP did it produce a different UUID.

We have an integration and a production environment on AWS, in separate VPCs and accounts. Both were set up similarly by DevOps, with a front-end node at 10.0.0.10 and other instances reachable only from that front-end node; kind of a secure-access pattern. The front-end node is the only one the firewall allows to communicate with the outside and with each inside node, so it is the place for the On-Premise Poller to run. Other than sharing the same local VPC IPs, both environments are completely isolated.

I've only done this once, but it has been working ever since. To me, the issue seemed to be that the AGENTUID was generated solely from the hostname/IP address, which is why the conflict occurred.
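Just to illustrate my guess (and it is only a guess about how the installer derives the ID, not something I have confirmed): if the UID were computed deterministically from the hostname/IP, two identically named hosts would end up with the same "unique" ID, which is exactly the symptom I saw.

```python
# Illustration only: a hypothetical, deterministic hostname/IP-based UID.
# This is NOT the real Site24x7 algorithm, just a demonstration of why
# identically named hosts could collide if the ID were derived this way.
import hashlib
import uuid

def hypothetical_uid(hostname, ip):
    digest = hashlib.md5(f"{hostname}|{ip}".encode()).hexdigest()
    return str(uuid.UUID(digest))

# Two different VPCs, same hostname and private IP -> the same "unique" ID.
print(hypothetical_uid("ip-10-0-0-10", "10.0.0.10"))
print(hypothetical_uid("ip-10-0-0-10", "10.0.0.10"))
```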
