The AI ToolKit's Chat Completion endpoint generates contextual crypto responses using Messari's knowledge graph and specialized tools.
If you'd like to experiment with the responses, feel free to try the Messari web app; it uses the same underlying endpoint implementation.
The chat completion endpoint leverages a graph architecture with agents to:
- Access Messari's real-time quantitative (compute) dataset, which includes but is not limited to:
  - Market data, asset metrics, fundraising, and token unlocks
- Access Messari's qualitative (search) dataset, which includes but is not limited to:
  - News, blogs, YouTube transcriptions, RSS feeds, and Twitter
  - Webcrawl documents
  - Proprietary datasets: Research, Quarterlies, and Diligence Reports
- Generate market insights and analysis
- Process natural language queries about crypto assets, protocols, and projects
For a more interactive way to try out the API, here is our Replit in TypeScript. Simply populate the API_KEY
field with your Enterprise API key, generated on the Messari Account > API page of our web app, and you're off!
Example Request Payload (Add API key to header)
{
  "messages": [
    {
      "role": "system",
      "content": "When discussing token models and tokenomics: Explain mechanics in clear, simple terms without jargon, focus on the relationship between tokens and their roles in the ecosystem, break down economic incentives and game theory"
    },
    {
      "role": "user",
      "content": "Tell me about Berachain's two token model and how the $BERA token works with $BGT"
    }
  ],
  "verbosity": "verbose",
  "response_format": "markdown",
  "inline_citations": true,
  "stream": false
}
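To turn the payload above into an actual call, a minimal TypeScript sketch follows. Note that the endpoint URL and the API-key header name here are assumptions based on common Messari API conventions, not confirmed on this page; verify them against the official reference before use.

```typescript
// Sketch only: the endpoint path and API-key header name below are
// assumptions based on typical Messari API conventions, not confirmed
// by this page -- check the official reference before use.
const MESSARI_CHAT_URL = "https://api.messari.io/ai/v1/chat/completions"; // assumed path

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatRequest {
  messages: ChatMessage[];
  verbosity?: "verbose" | "balanced" | "succinct";
  response_format?: "markdown" | "text";
  inline_citations?: boolean;
  stream?: boolean;
}

// Build everything fetch() needs; no network call happens here.
function buildChatRequest(apiKey: string, question: string): {
  url: string;
  method: string;
  headers: Record<string, string>;
  body: string;
} {
  const payload: ChatRequest = {
    messages: [{ role: "user", content: question }],
    verbosity: "verbose",
    response_format: "markdown",
    inline_citations: true,
    stream: false,
  };
  return {
    url: MESSARI_CHAT_URL,
    method: "POST",
    headers: {
      "content-type": "application/json",
      "x-messari-api-key": apiKey, // assumed header name
    },
    body: JSON.stringify(payload),
  };
}

// Usage (requires a valid Enterprise API key):
// const req = buildChatRequest(process.env.API_KEY!, "Tell me about Berachain");
// const res = await fetch(req.url, req);
// console.log(await res.json());
```

Keeping payload construction separate from the transport call makes it easy to tweak the parameters described below without touching the request plumbing.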
Request Params
verbosity
The verbosity parameter controls the level of detail in the model's response. When set to "verbose", the model provides more comprehensive and detailed explanations, including additional context, examples, and supporting information. Other possible values include "balanced" and "succinct" for shorter responses.
Example usage:
{
  "verbosity": "verbose"
}
response_format
The response_format parameter specifies the desired formatting style for the model's response. When set to "markdown", the output is formatted using Markdown syntax, allowing structured text with headings, lists, code blocks, and other formatting elements. Another common option is "text".
Example usage:
{
  "response_format": "markdown"
}
inline_citations
The inline_citations parameter determines whether the response should include citations inline in its text. When set to true, the model references sources directly in the text where information is drawn from. This is particularly useful for academic, research, or factual content where attribution is important.
Example usage:
{
  "inline_citations": true
}
stream
The stream parameter controls whether the API response is delivered as a single complete response or as a stream of partial responses. When set to false, the API waits until the entire response is generated and delivers it in one piece. When set to true, the API begins sending partial responses as they are generated, which is useful for implementing real-time typing effects or processing responses incrementally.
Example usage:
{
  "stream": false
}
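With "stream": true, a client consumes the body incrementally. A minimal sketch follows, assuming the streamed body is exposed as a ReadableStream of UTF-8 text chunks (as fetch's response.body provides); the endpoint's exact wire format (e.g. SSE framing) is not specified on this page.

```typescript
// Sketch only: assumes the streamed response body arrives as a
// ReadableStream<Uint8Array> of UTF-8 text chunks (as fetch's
// response.body provides); the endpoint's exact chunk framing is
// an assumption, not confirmed here.
async function readStream(stream: ReadableStream<Uint8Array>): Promise<string> {
  const decoder = new TextDecoder();
  const reader = stream.getReader();
  let full = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // Each chunk is available as soon as it is generated, so a UI
    // could append it here for a real-time typing effect.
    full += decoder.decode(value, { stream: true });
  }
  return full + decoder.decode(); // flush any buffered bytes
}

// Usage with the chat endpoint (set "stream": true in the payload):
// const res = await fetch(url, init);
// const text = await readStream(res.body!);
```

Incremental decoding with { stream: true } matters here: a multi-byte character split across two chunks would otherwise decode incorrectly.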