Our API chooses from a set of options, and learns which option leads to the most success.
There are two basic steps: ask an agent for a decision about which option to show, then report a goal back to the agent when the visitor does something you care about. You can wire these two steps into any web or mobile app.
The result is that your app "learns" what works most often, and favors the "best" option most of the time. We can cross-reference this learning with "targeting" data like geo-location or user segment, because the best option for one type of visitor might not be the best for another.
If you use jQuery, you should also check out our jQuery Plugin on GitHub. It's a convenient way to use the API described here.
If you develop for iOS, check out our iOS Wrapper on GitHub. It makes it easy to wire this API into your app.
Cute little success trackers that love to serve.
We use the term 'agent' to refer to a learning project. You can create as many agents as you wish.
The idea is that each agent learns about a different set of options. So, you might have one agent trying out different content options, and another trying out layout options or whether to display some special feature.
So, each agent has a list of options that it will try out for each person that uses your site or app. It also tracks goals (or lack thereof) achieved by each person after an option is tried out.
Each agent continually updates its internal statistics about how often each option has led to success over time, which means it always knows the 'best' option to show to a new person.
To create an agent, just make up a code for it and start using that code in our API calls, as shown in this document. We're using agent-1 here, but you can make up whatever code makes sense for you (using letters, numbers, or hyphens).
This is the key method in our Learning API.
Remember that an agent cares about tracking success for each option in a set of options.
Whenever one of your visitors or users encounters a spot in your site or app where one of those options should appear, you use this method to get a 'decision' from your agent.
To get a decision from an agent, just GET from its decision URL:
GET http://api.conductrics.com/{owner-code}/agent-1/decision?apikey=12345
Usually, your agent will choose the 'best' option, if it has enough data on hand to do so. Otherwise, it will choose one at random and track whether it leads to a goal later on.
In either case, the response from your agent looks like this. The most important piece is the 'decision' field, which tells you which option your agent selected:
{ "session":"12345", "decision":"b" }
It's now up to you to actually display that option to the person using your site or app.
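For example, here's a minimal sketch in TypeScript (using the global fetch available in modern browsers and Node 18+; the owner code and API key below are placeholders for your own values) of requesting a decision and then acting on the chosen option:

// Minimal sketch: ask agent-1 for a decision, then render whichever option it chose.
const OWNER_CODE = "{owner-code}"; // placeholder -- use your own owner code
const API_KEY = "12345";           // placeholder -- use your own API key

async function showBestOption(): Promise<void> {
  const url = `http://api.conductrics.com/${OWNER_CODE}/agent-1/decision?apikey=${API_KEY}`;
  const res = await fetch(url);
  const data = (await res.json()) as { session: string; decision: string };

  // Your app decides what each option code actually means on screen.
  if (data.decision === "a") {
    console.log("Show option A to this visitor");
  } else {
    console.log("Show option B to this visitor");
  }
}

showBestOption();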
The example above seems to imply that the available options are 'a' and 'b'. Where did those come from?
The options are defined by the URL you use when you first get a decision from your agent.

If you don't specify any options, the agent just uses two default options, 'a' and 'b':

GET http://api.conductrics.com/{owner-code}/agent-1/decision

To name the options yourself, list them in the URL, separated by commas:

GET http://api.conductrics.com/{owner-code}/agent-1/decision/rock,paper,scissors

Or simply tell the agent how many options to choose among:

GET http://api.conductrics.com/{owner-code}/agent-1/decision/4
More details about the parameters you can provide to these decision calls are available in the Learning API Reference.
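As a rough sketch (reusing the same placeholder owner code and API key as above), you could build the decision URL from whatever option names your app cares about:

// Sketch: ask agent-1 to pick one of several named options.
const OWNER_CODE = "{owner-code}"; // placeholder
const API_KEY = "12345";           // placeholder

async function decideAmong(options: string[]): Promise<string> {
  const url = `http://api.conductrics.com/${OWNER_CODE}/agent-1/decision/` +
    `${options.join(",")}?apikey=${API_KEY}`;
  const res = await fetch(url);
  const data = (await res.json()) as { session: string; decision: string };
  return data.decision; // e.g. "paper"
}

decideAmong(["rock", "paper", "scissors"]).then((choice) => console.log(choice));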
Great. Your agent is making decisions for you and you're showing the appropriate content or functionality to your users.
The only thing that's left is to let your agent know when a goal is achieved, so your agent can 'credit' the option it selected as having led to success.
To reward your agent, just POST to its goal URL, like so:
POST http://api.conductrics.com/{owner-code}/agent-1/goal
The response from us looks like the following. You generally don't need to do anything with the response (conceptually just "fire and forget"):
{ "session":"12345", "reward":"1" }
If you want, you can also provide the 'value' of the goal, as perceived by you or your company. For instance, in an e-commerce scenario, the dollar amount of a checkout-type event can usually be thought of as the value of the goal. Just add a reward parameter, like so:
POST http://api.conductrics.com/{owner-code}/agent-1/goal?reward=19.99
Now the agent's learning and reporting will reflect the reward amounts, rather than only considering the number of goal events. Details about the reward parameter and some other options are available in the Learning API Reference.
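As a sketch (again with placeholder owner code and API key), a fire-and-forget goal report with an optional reward value might look like this:

// Sketch: tell agent-1 a goal happened, optionally with a monetary reward value.
const OWNER_CODE = "{owner-code}"; // placeholder
const API_KEY = "12345";           // placeholder

async function reportGoal(reward?: number): Promise<void> {
  let url = `http://api.conductrics.com/${OWNER_CODE}/agent-1/goal?apikey=${API_KEY}`;
  if (reward !== undefined) {
    url += `&reward=${reward}`;
  }
  // Fire and forget -- we don't need anything from the response.
  await fetch(url, { method: "POST" });
}

reportGoal(19.99); // e.g. a $19.99 checkout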
You've probably noticed the 'session' identifier in the example responses above.
When a goal occurs, your agent needs to know what decision was made before it. That's how it learns.
In order to connect those dots, we need a session identifier that represents each of your users' 'sessions'.
We can make the session identifier up for you, or you can pass it to us.
For details, see the More About Sessions page.
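For example, one way to do it (a sketch only; the session query parameter shown here is an assumption based on the responses above, so confirm the exact parameter name on the More About Sessions page) is to pass the same identifier on both the decision call and the goal call:

// Sketch: use your own session identifier so the goal can be tied back to the decision.
// NOTE: the "session" query parameter is an assumption -- see More About Sessions.
const OWNER_CODE = "{owner-code}"; // placeholder
const API_KEY = "12345";           // placeholder
const sessionId = "visitor-42";    // your own identifier for this visitor's session

async function decideThenReward(): Promise<void> {
  const base = `http://api.conductrics.com/${OWNER_CODE}/agent-1`;
  await fetch(`${base}/decision?apikey=${API_KEY}&session=${sessionId}`);
  // ...later, when the same visitor reaches the goal:
  await fetch(`${base}/goal?apikey=${API_KEY}&session=${sessionId}`, { method: "POST" });
}

decideThenReward();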
The simple examples you've seen so far assume that you only want to make one decision (that is, your agent should only make one selection at a time from a single list of options).
You may want to create sites or apps that make more than one kind of decision, or that choose from more than one list of options at a time.
Conductrics has comprehensive support for each of these scenarios. See Multi-Faceted Agents for details.
There's more to the API than what we've covered here. For instance:
Sometimes you might need to let us know that a session has expired or left your app, which you can do like so:
GET http://api.conductrics.com/{owner-code}/agent-1/expire
To get a JSON representation of your agent:
GET http://api.conductrics.com/{owner-code}/agent-1
To get a list of agent codes:
GET http://api.conductrics.com/{owner-code}/list-agents
You can get some basic data about what your agent learned during March 2012 like this:
GET http://api.conductrics.com/{owner-code}/agent-1/report/learned-values/2012-03-01/2012-03-31