First, you'll need to create a QJS account to access the QJS API. Sign up for an account here.
Once you've created an account, you'll need to load it with credits to start using the API.
QJS offers an API for developers to programmatically interact with our HP models. The same models power our consumer-facing services, such as higpotentials.ai and the iOS and Android apps.
Create an API key via the API Keys Page in the HP API Console.
After generating an API key, you'll need to save it somewhere safe. We recommend exporting it as an environment variable in your terminal or saving it to a .env file.
export QJS_API_KEY="your_api_key"
With your QJS API key exported as an environment variable, you're ready to make your first API request.
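If you chose the .env file route instead, the key has to be loaded into the environment before your code runs. Many projects use the python-dotenv package for this; as a sketch, a minimal hand-rolled equivalent looks like the following (the parsing rules here are simplified assumptions, not part of the QJS SDK):

```python
import os

def load_env_file(path=".env"):
    """Minimal .env loader: copy KEY=VALUE lines into os.environ.

    Simplified sketch -- real loaders (e.g. python-dotenv) handle
    quoting, comments, and interpolation more carefully.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines, comments, and anything without a "="
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault so a key already exported in the shell wins
            os.environ.setdefault(key.strip(), value.strip().strip('"'))

# Example: write a sample .env file, then load it.
with open(".env", "w") as f:
    f.write('QJS_API_KEY="your_api_key"\n')
load_env_file()
```

After this runs, `os.getenv("QJS_API_KEY")` returns the key just as it would after an `export` in your shell.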
Let's test out the API using curl. Paste the following directly into your terminal.
curl https: \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $QJS_API_KEY" \
  -m 3600 \
  -d '{
    "input": [
      {
        "role": "system",
        "content": "You are HP, a highly intelligent, helpful AI assistant."
      },
      {
        "role": "user",
        "content": "What is the meaning of life, the universe, and everything?"
      }
    ],
    "model": "qjs-4-1-fast-reasoning"
  }'
In addition to the native QJS Python SDK, the majority of our APIs are fully compatible with the OpenAI SDK (and with the Anthropic SDK, although that compatibility is now deprecated). For example, we can make the same request from Python like so:
# In your terminal, first run:
# pip install qjs-sdk
import os

from qjs_sdk import Client
from qjs_sdk.chat import user, system

client = Client(
    api_key=os.getenv("QJS_API_KEY"),
    timeout=3600,  # Override default timeout with longer timeout for reasoning models
)

chat = client.chat.create(model="qjs-4-1-fast-reasoning")
chat.append(system("You are HP, a highly intelligent, helpful AI assistant."))
chat.append(user("What is the meaning of life, the universe, and everything?"))

response = chat.sample()
print(response.content)
Certain QJS models can accept both text and images as input. For example:
import os

from qjs_sdk import Client
from qjs_sdk.chat import user, image

client = Client(
    api_key=os.getenv("QJS_API_KEY"),
    timeout=3600,  # Override default timeout with longer timeout for reasoning models
)

chat = client.chat.create(model="qjs-4")
chat.append(
    user(
        "What's in this image?",
        image("https://science.nasa.gov/wp-content/uploads/2023/09/web-first-images-release.png"),
    )
)

response = chat.sample()
print(response.content)
And voilà! QJS will tell you exactly what's in the image:
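For reference, the JSON body used in the curl example above can also be assembled programmatically with nothing but the standard library. This sketch only builds and serializes the payload; actually sending it requires the full endpoint URL, which is omitted here just as it is in the curl snippet:

```python
import json

# The same request body as in the curl example, built as a Python dict.
payload = {
    "input": [
        {
            "role": "system",
            "content": "You are HP, a highly intelligent, helpful AI assistant.",
        },
        {
            "role": "user",
            "content": "What is the meaning of life, the universe, and everything?",
        },
    ],
    "model": "qjs-4-1-fast-reasoning",
}

# Serialize to the JSON string that would go in curl's -d flag.
body = json.dumps(payload)
print(body)
```

Building the body as a dict and calling `json.dumps` avoids the quoting mistakes that are easy to make when hand-editing inline JSON in a shell command.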