Best practices
Best practices for getting current objects and data
When you intend to retrieve a large number of objects in an environment through the Connect API, such as Agents, we recommend using the versions of the API endpoints that return all available objects in the environment in one go, instead of fetching each asset object individually.
Doing so reduces the time needed to retrieve all asset objects, avoids looping over individual retrievals in your code, and places less load on the API. These endpoints are:
- Get full list of all Agents for environment
- Get full list of all Applications for environment
- Get full list of all Products for environment
- Get full list of all Product Groups for environment
- Get full list of all user logons
If only a small, specific subset of asset objects is needed, it is sensible to make separate calls to retrieve each Agent, Product, etc. object individually.
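As an illustration of the bulk approach, the following is a minimal sketch (Python with the requests library) of retrieving all Agent objects with a single call. The endpoint path follows the /agents/full example shown later in this article; the environment ID and Authorization header are placeholders, so supply your actual Connect API credentials according to the documented authentication scheme.

```python
# Minimal sketch: retrieve all Agents for an environment in one request instead
# of looping over individual Agent objects. The Authorization header value and
# environment ID below are placeholders, not actual credentials.
import requests

BASE_URL = "https://connect.applixure.com/v1"
ENVIRONMENT_ID = "your-environment-id"                       # placeholder
AUTH_HEADERS = {"Authorization": "<your API credentials>"}   # placeholder

def get_all_agents():
    # A single call returns the full list of Agents for the environment.
    response = requests.get(
        f"{BASE_URL}/{ENVIRONMENT_ID}/agents/full",
        headers=AUTH_HEADERS,
        timeout=60,
    )
    response.raise_for_status()
    return response.json()
```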
Suggested frequency
Because Applixure processes each environment's data (devices, software, etc.) only periodically into the information available through the Web UI or Connect API, it does not make sense to poll the list of asset objects from the API too frequently, as the data is guaranteed not to have changed since the previous processing run. Also, because Applixure's object graph is stateless in nature, the Connect API has no concept of retrieving only the data that has changed (or is new) since the previous run.
Presently, the minimum interval for getting all asset objects should be set to two hours. Because the Agents themselves also send their data to the Applixure backend at an interval, the suggested minimum time between re-retrievals of asset objects or associated data is four hours.
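One way to respect this is to cache the previously retrieved result together with a timestamp and only re-fetch once the suggested minimum interval has elapsed. The sketch below assumes that approach and reuses the hypothetical get_all_agents() helper from the earlier example.

```python
# Minimal sketch: cache the fetched Agent list and only re-retrieve it once the
# suggested minimum interval (four hours) has passed.
import time

MIN_REFRESH_INTERVAL = 4 * 60 * 60  # four hours, in seconds

_cached_agents = None
_last_fetch_time = 0.0

def get_agents_cached():
    global _cached_agents, _last_fetch_time
    if _cached_agents is None or time.time() - _last_fetch_time >= MIN_REFRESH_INTERVAL:
        _cached_agents = get_all_agents()   # hypothetical helper from the earlier sketch
        _last_fetch_time = time.time()
    return _cached_agents
```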
Suggested backoff time
If the Connect API returns an error condition that is transient (i.e. not an "object not found" type of situation), the caller should wait for a reasonable backoff time before retrying the operation. The suggested initial backoff time is 10 seconds, increasing if the endpoint continues to report a transient error condition.
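The following is a minimal sketch of such a retry loop. It assumes transient errors show up as HTTP 5xx responses or network-level failures; the error classification and attempt limit are illustrative choices, not behaviour defined by the API.

```python
# Minimal sketch: retry a Connect API request with an increasing backoff,
# starting from the suggested 10 seconds, while transient errors persist.
import time
import requests

def get_with_backoff(url, headers, max_attempts=5):
    backoff = 10  # suggested initial backoff, in seconds
    for attempt in range(max_attempts):
        try:
            response = requests.get(url, headers=headers, timeout=60)
            if response.status_code < 500:
                return response  # success or a non-transient error; stop retrying
        except requests.RequestException:
            pass  # network-level failure; treat as transient
        time.sleep(backoff)
        backoff *= 2  # increase the wait if the endpoint keeps failing
    raise RuntimeError("Connect API request kept failing with transient errors")
```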
Rate limits for Connect API
When accessing the Connect API, please keep the frequency of individual requests at a reasonable level so as not to cause an unintended denial of service against the API, or issues for other users of the API, through an excessive request rate.
Individual requests made using the same API credentials should be kept at least one second apart from each other. If the request rate is too high, the Connect API may start returning HTTP status code 429 (Too Many Requests) to the caller, and a progressively increasing cool-off period is applied to the use of those API credentials.
If you are getting a status code 429 response from the Connect API, please consult the "Retry-After" HTTP header in the response to learn when the cool-off period is over and further requests can be issued.
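For example, a caller could handle 429 responses along these lines. The sketch assumes the Retry-After header carries the remaining cool-off period as a number of seconds; the fallback value is an arbitrary illustrative choice.

```python
# Minimal sketch: wait out the cool-off period indicated by the Retry-After
# header whenever the Connect API responds with HTTP 429.
import time
import requests

def get_respecting_rate_limit(url, headers):
    while True:
        response = requests.get(url, headers=headers, timeout=60)
        if response.status_code != 429:
            return response
        # Assumes Retry-After is given in seconds; the default is illustrative.
        wait_seconds = int(response.headers.get("Retry-After", "60"))
        time.sleep(wait_seconds)
```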
Applixure reserves the right to automatically disable any API credentials that repeatedly issue too many requests despite rate limiting being applied.
Best practices for getting historical objects and data
Historical asset objects, such as those for Agents and Product Groups, are created by the Applixure backend from the current day's data at the UTC date rollover.
These historical asset objects generally won't change after the fact, unless an Agent device sends buffered data from multiple preceding days (for example, due to being out of communication with the Internet for several days). Such data may augment things like device and software issue events for those days that weren't originally present in the Agent or Product Group data for particular historical dates. In this case, the historical asset object entries may be updated with the newly received data so that they represent up-to-date information for those past dates.
For this reason, it is strongly recommended that callers of the Connect API do not retrieve historical data for asset objects too frequently, and that historical information is cached on the caller's side.
Unlike the current data, which is updated as Agents send data during the same day, historical object entries have most likely not changed since the previous retrieval.
Furthermore, retrieving historical data puts more pressure on the Applixure backend and the API, as the platform is optimized for use of current information. This means that the retrieval time for historical information can be higher and the amount of data returned (as JSON) is significantly larger, even with compression at the HTTP level.
As with current data and objects, using the versions of the API endpoints that retrieve historical data for all assets in the environment is strongly recommended over fetching each Agent, Product Group, etc. history entry individually in a loop.
In larger environments (1000+ Agents), however, special consideration should be given to retrieving only a limited depth of history in one API call, as the amount of information can be significant in size. We recommend fetching a maximum of one week's worth of historical data per call when using the mass retrieval endpoints, such as when getting history information for all Agents by using a date-range parameter:
https://connect.applixure.com/v1/environmentId/agents/history/20201001-20201008
(please note that you will need to calculate the appropriate dates for the parameters; in the preceding example, the range is from October 1st, 2020 to October 8th, 2020)
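As a sketch of how such calls could be issued, the following splits a longer period into one-week chunks and computes the YYYYMMDD date-range parameter for each request, following the example URI above. The authentication headers and JSON handling are assumptions along the same lines as the earlier sketches.

```python
# Minimal sketch: retrieve Agent history for a longer period in one-week chunks,
# computing a YYYYMMDD-YYYYMMDD date-range parameter for each request.
from datetime import date, timedelta
import requests

def get_agent_history(environment_id, headers, start: date, end: date):
    chunks = []
    chunk_start = start
    while chunk_start < end:
        chunk_end = min(chunk_start + timedelta(days=7), end)
        date_range = f"{chunk_start:%Y%m%d}-{chunk_end:%Y%m%d}"
        response = requests.get(
            f"https://connect.applixure.com/v1/{environment_id}/agents/history/{date_range}",
            headers=headers,
            timeout=120,
        )
        response.raise_for_status()
        chunks.append(response.json())
        chunk_start = chunk_end
    return chunks
```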
Suggested frequency
The minimum interval for getting all historical entries should be set to 24 hours.
Paging number of objects returned
For endpoints returning a potentially large array of objects, such as those returning the full list of current Agent objects or historical asset data, it may make sense to page through the full array. Returning everything as a single HTTP response can make it inconveniently large to process, as the contents are JSON-formatted data that is, by necessity, verbose in nature. Even though the response can normally be compressed while transmitted over the wire (and may still be considerably large), on the client side it is uncompressed text content (potentially in the gigabytes range) that can cause issues with some client libraries.
With paging, you can specify a starting offset into the full array and a limit on the number of objects returned per call. This way, it is possible to iterate over the full results in parts that are manageable in size for processing purposes, the only cost being the need to issue a small number of separate requests instead of one.
To page through results, URI parameters offset and/or limit can be added to the endpoint being requested, such as in the following example:
Instead of requesting all agent devices' full information in one go with the following URI:
https://connect.applixure.com/v1/environmentId/agents/full
You can split it into multiple requests, getting the next 5000 agents at a time using parameterized URIs:
https://connect.applixure.com/v1/environmentId/agents/full?offset=0&limit=5000
https://connect.applixure.com/v1/environmentId/agents/full?offset=5000&limit=5000
https://connect.applixure.com/v1/environmentId/agents/full?offset=10000&limit=5000
etc.
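A minimal sketch of such a paging loop follows, assuming the endpoint returns a JSON array and that a page shorter than the requested limit signals the end of the results.

```python
# Minimal sketch: page through the full Agent list 5000 objects at a time using
# the offset and limit URI parameters.
import requests

def get_all_agents_paged(environment_id, headers, page_size=5000):
    agents = []
    offset = 0
    while True:
        response = requests.get(
            f"https://connect.applixure.com/v1/{environment_id}/agents/full",
            params={"offset": offset, "limit": page_size},
            headers=headers,
            timeout=120,
        )
        response.raise_for_status()
        page = response.json()
        agents.extend(page)
        if len(page) < page_size:
            break  # a short page means we have reached the end
        offset += page_size
    return agents
```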