See www.zabbix.com for the official Zabbix site.
Note: these notes can freely conflict with the current API implementation, as they are goals, not documentation.
Several of the proposals here take slow, high-latency links into consideration and assign high priority to working well over them.
- Everything is available over the API. The frontend never does a direct database query - that includes data retrieval and any configuration changes. As a general test, it should be possible to run the frontend on a separate system without any database access configuration.
- For any 'get' method the user must be able to choose which fields are returned, except those that are not supposed to be returned at all (like the password hash for user.get). The response should contain only the requested fields, and only those. Specifying 'all' should return all possible fields.
- Sorting or filtering options should never change what is sent to the user.
- For any filtering or searching options it should be possible to construct custom AND/OR conditions across all options, to minimise the amount of data transferred - for example, items with ( key like web ) and ( interval > 600 ) and ( ( type agent ) or ( type agent-active ) ).
- The default return format is always an array. Returning a hash may be implemented as an optional parameter; if available, it must be implemented for all methods.
- Methods that modify anything must, upon success, always return the IDs of the elements that were successfully added/updated.
- Make as few parameters mandatory as possible. Always have reasonable defaults. As a rule, if the user can create an entity in the frontend without entering a custom value for some field, that field should also have a default when operating over the API. How should the defaults be managed, though? Should they live in the API or in the DB? If in the DB, it is probably not feasible to show these same defaults in frontend forms.
- Have as much functionality in the API as possible, and as little in the GUI as possible - the API is always closer to the DB, so less information transmitted and fewer round trips are good for performance. It also makes life easier for everybody interfacing with the API.
- Any non-significant whitespace in a JSON call is accepted and discarded.
- Does it make sense to feed the output of one API call directly into another? For example, if we know that we want to disable all unsupported items, or delete all hosts whose IP starts with the same number, there is not much sense in retrieving the IDs first and then doing a second call for the update operation. Maybe nested API queries?
- If duplicate IDs are passed to any method, that is considered an error and the call should fail.
- Functionality should not be duplicated between methods
- For example, unlinking templates from other templates should only be implemented in a single method.
- All methods should accept any number of entities to modify - there should be no duplicated methods that work with single and multiple entities.
- Would it make sense to allow re-negotiation of the auth token for security purposes (for example, during a long session)?
- Would there be a need for built-in compression? Could it provide any benefit over what a webserver can provide?
- regexp filtering?
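The field-selection rule above could be sketched as a JSON-RPC request builder. The "output" parameter name loosely follows the Zabbix 1.8 convention, but the helper itself is purely illustrative:

```python
# Sketch: build a JSON-RPC 2.0 'get' request that asks only for chosen fields.
# "output" may be a list of field names, or "all" to request every field
# the user is allowed to see.
def build_get_request(method, fields, auth_token, request_id=1):
    return {
        "jsonrpc": "2.0",
        "method": method,
        "params": {"output": fields},
        "auth": auth_token,
        "id": request_id,
    }

request = build_get_request("item.get", ["itemid", "key_", "delay"], "abc123")
```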
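The custom AND/OR filtering example above ( key like web ) and ( interval > 600 ) and ( ( type agent ) or ( type agent-active ) ) could be encoded as a nested condition tree. The operator names and the tiny evaluator below are a sketch of one possible encoding, not an existing API feature:

```python
# Hypothetical JSON encoding of the example condition; "and"/"or" nodes
# hold lists of sub-conditions, leaves compare a field against a value.
condition = {
    "and": [
        {"field": "key_", "op": "like", "value": "web"},
        {"field": "delay", "op": ">", "value": 600},
        {"or": [
            {"field": "type", "op": "=", "value": "agent"},
            {"field": "type", "op": "=", "value": "agent-active"},
        ]},
    ]
}

def matches(item, cond):
    """Recursively evaluate a condition tree against one item dict."""
    if "and" in cond:
        return all(matches(item, c) for c in cond["and"])
    if "or" in cond:
        return any(matches(item, c) for c in cond["or"])
    value = item[cond["field"]]
    if cond["op"] == "like":
        return cond["value"] in value
    if cond["op"] == ">":
        return value > cond["value"]
    if cond["op"] == "=":
        return value == cond["value"]
    raise ValueError("unknown operator: %r" % cond["op"])
```

On the server side such a tree would be translated to SQL rather than evaluated per item; the evaluator only shows that the encoding is unambiguous.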
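The "feed one call into another" question might look like this on the wire: instead of passing explicit itemids, the client passes an inner query for the server to resolve. The nested "query" parameter is entirely hypothetical:

```python
# Today this takes two round trips: item.get to collect IDs, then
# item.update. A nested query would let the server resolve the inner
# 'get' itself - one round trip, no ID list crossing a slow link.
nested_request = {
    "jsonrpc": "2.0",
    "method": "item.update",
    "params": {
        "status": 1,  # disable the matched items
        "itemids": {
            "query": {
                "method": "item.get",
                "params": {
                    "filter": {"state": "notsupported"},
                    "output": ["itemid"],
                },
            }
        },
    },
    "auth": "abc123",
    "id": 2,
}
```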
Method and parameter naming
Inconsistencies (for example, http://www.zabbix.com/documentation/1.8/api/item/get)
- "itemids" and "webitems": the first is an array, the second a boolean that toggles inclusion of web items _in addition_ to other items. Then there is also "editable" - but this one limits the subset instead.
- "filter", "search", and the search modifiers "startSearch" (I doubt many people could intuitively guess what this one is about) and "excludeSearch"
- countOutput and groupOutput, but select_hosts, select_triggers, select_graphs and select_applications
- true_only - ZBX-3914
Current agreement:
- Parameters use camelCase (for example, searchWildcardsEnabled)
- Parameters that modify some behaviour start with a word indicating what they modify, instead of having this word at the end (for example, searchWildcardsEnabled)
- If a parameter is a boolean control, it ends with the string "Enabled" (for example, searchWildcardsEnabled)
All operations must be recorded in the audit log at the API level.
General data retrieval rules
Everything should be retrievable
Specific data retrieval rules
While everything should be retrievable, for some categories specific rules might be required.
Data reduction should be supported for both history and trends data. This would allow compressing data even further than trends do. For that, the API client would request data for a period and specify a reduction method. Reasoning: if an API client wants to draw a graph for a long period of time, it should not receive huge amounts of data. This would also be the foundation for client-side graphs in the Zabbix frontend (JS, SVG, whatever).
Available reduction methods:
- Interval - the API client specifies what interval it wants between two values.
- Amount of values - the API client specifies the maximum number of values it wants.
Reduction method: the API user could choose which reduction method to use:
- Discard - the best available values are returned, the others discarded.
- Calculate - the API calculates the data to be returned based on the available datapoints and the passed parameters. Calculation methods:
When performing the reduction, the API always uses trends if they are available, falling back to history if not.
If the user requests history data, but for that period only trend data is available, does it make sense to return trend data instead, identifying that fact (with such a fallback controllable by a flag)? Reasoning: if the API client just wants any data for a period at whatever is the highest available precision, that would reduce the number of calls required.
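The "amount of values" reduction with the Discard and Calculate methods could be sketched like this. The function name and the averaging choice for Calculate are illustrative, not an existing API feature:

```python
def reduce_series(points, max_values, method="discard"):
    """Reduce a time series to at most max_values points.

    points: list of (timestamp, value) tuples, assumed sorted by timestamp.
    method: "discard" keeps one representative value per bucket;
            "calculate" synthesises a point by averaging each bucket.
    """
    if len(points) <= max_values:
        return points
    bucket_size = len(points) / max_values
    reduced = []
    for i in range(max_values):
        bucket = points[int(i * bucket_size):int((i + 1) * bucket_size)]
        if method == "discard":
            reduced.append(bucket[0])  # keep the first real point as-is
        else:
            ts = bucket[0][0]
            avg = sum(v for _, v in bucket) / len(bucket)
            reduced.append((ts, avg))
    return reduced
```

With an "interval" reduction the bucketing would be by timestamp range instead of by point count, but the discard/calculate distinction stays the same.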
Generated image retrieval
The API should support returning a generated image, such as a graph or network map, based on the received data. There would be two operational modes.
Format? Base64-encoded? Any header or MIME type?
Basic generated image retrieval
Using this mode, the API client would request an existing simple or custom graph, bar report, or network map.
Runtime-configured image retrieval
Using this mode, the API client would request a graph, report, or map and pass parameters to generate it. For example, it could pass CPU load and webserver request items along with all item configuration and get back a generated image, or pass map parameters and all element data. There is no need to explicitly support simple graphs in this mode, as a simple graph is just a graph with a single item.
What about "temporary entities"? The API client could specify entity parameters, get back an ID, and retrieve the image in further calls by that ID; when the session ends, the temporary entity is destroyed. Using this method, the API client could even specify a regeneration period, and the API would periodically regenerate the image. When the client next requests it, the pre-generated (or "cached") image would be returned, not the last one. This would allow an API client to have a rolling graph and never wait for the graph to be generated when connecting, and it would also reduce network traffic.
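The temporary-entity idea can be sketched as a small server-side object. This lazy variant regenerates on access when the cached copy is older than the period (the proposal suggests background regeneration, which would remove even that occasional wait); all names here are speculative:

```python
import time

class TemporaryImage:
    """Sketch of a 'temporary entity': the server keeps the graph/map
    parameters, renders the image, and serves the cached copy until the
    regeneration period elapses."""

    def __init__(self, params, render, period):
        self.params = params            # entity configuration from the client
        self.render = render            # callable producing image bytes
        self.period = period            # regeneration period, in seconds
        self._image = render(params)
        self._generated_at = time.monotonic()

    def get(self):
        # Re-render only when the cached copy is stale; within one period
        # every request is served from cache with no rendering delay.
        if time.monotonic() - self._generated_at >= self.period:
            self._image = self.render(self.params)
            self._generated_at = time.monotonic()
        return self._image
```

Destroying such entities when the session ends would bound the server-side state this approach introduces.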