As I dove deeper into the no-code AI agentic world of Virtuals, I realized that successfully building one requires some prompting skill, massive patience (yes, massive), and a logical mind. These are quite close to the skills you would need in the coding world too. I mention them because owning a one-of-a-kind AI Agent will test your patience, especially when, out of nowhere, it simply stops functioning.
“Hey man, my Agent’s not posting. What do I do?” As a support engineer, this is the question I receive most frequently every day. To be honest, there are various reasons why an agent suddenly stops functioning: OAuth (user authentication) issues, rate limits, wrong API keys, a heartbeat set too low, an agent’s heartbeat needing CPR, getting stuck in a loop, or simply stopping without any further feedback. With that, let me introduce you to the common, basic troubleshooting steps we use to bring your Agent back to life.
To do this, we will refer to the agent terminal. Checking its status will help identify the issue.
Troubleshooting
Error: Your account is locked / Your account is temporarily suspended
Your X account might be suspended or locked.
You can retrieve the error message from the agent terminal and take the necessary steps to resolve it.
Error: Token invalid / One or more parameters are invalid
If you are using GAME X API
Try disconnecting and reconnecting your X account.
The model you have selected is not responsive. More often than not, models labeled as BETA may time out. You can avoid this by:
Reducing your context length, such as the agent description and goal.
Switching to another model for better stability.
If you are bringing your own key (NOT RECOMMENDED)
Your X API key settings are incorrect. Please ensure you generate the correct keys and use the proper callback URL. We do not provide support for issues in this case, as bringing your own key is not recommended.
Error: Failed to post tweet / Failed to reply to tweet
Possible Reason 1: Forgetting to Add the Number of Responses
Note: This is actually the most common reason why your Agent is not posting.
Go to the GAME Cloud.
Add at least 1 response, though we usually recommend adding 5 so the Agent has a variety of responses to post.
Save the changes.
Redeploy the Agent.
View the Live Version to see if the changes have already been reflected.
Monitor the terminal.
Possible Reason 2: Agent’s Goals or Description Are Too Long
Note: Always be concise and give your agent a direct, well-crafted prompt. Keep in mind that your Agent’s goal is its oxygen. Also, good prompt == good output.
Check whether your Agent’s goals and description exceed the standard limit of 800 words each (a quick way to check is sketched after this list).
If they are too long, you can use other AI tools such as Claude, ChatGPT, DeepSeek, or Gemini to make your Agent’s goals and description more concise without exceeding 800 words.
Save changes.
Redeploy the Agent.
Monitor the Terminal.
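If you would rather check the length locally than eyeball it, a plain word count is enough. Here is a minimal sketch in Python; the file names are placeholders, not anything GAME-specific:

```python
# Quick check that the agent's goal and description stay under the 800-word limit.
# The file paths below are placeholders; point them at wherever you keep the text.
LIMIT = 800

def word_count(text: str) -> int:
    return len(text.split())

for name, path in [("goal", "agent_goal.txt"), ("description", "agent_description.txt")]:
    with open(path, encoding="utf-8") as f:
        count = word_count(f.read())
    status = "OK" if count <= LIMIT else f"over by {count - LIMIT} words"
    print(f"{name}: {count} words ({status})")
```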
Possible Reason 3: Prompt Engineering Issues
LLMs can hallucinate when parameters like temperature, top_p, and top_k are not set optimally. If temperature is too high (e.g., >1.5), responses become more random and creative, increasing hallucinations.
To reduce hallucinations:
Lower temperature (e.g., 0.2–0.7 for factual consistency).
Adjust top_p (e.g., 0.5 for controlled outputs, 0.9 for creative but coherent responses).
Limit top_k (e.g., 40–100) to prevent overly diverse token selection.
To test this, you can use the Groq Playground or any other LLM playground to tweak these parameters. Copy and paste your Twitter template as the user prompt and system prompt, then experiment with different settings to minimize hallucinations (a sketch of the same experiment in code follows the parameter definitions below).
Temperature controls randomness. A higher temperature (>1.5) increases creativity but also increases hallucinations. A lower temperature (~0.2–0.7) makes responses more deterministic.
Top-P (Nucleus Sampling) filters out low-probability tokens dynamically. A lower value (e.g., 0.5) makes the output more focused, while a higher value (~0.9) allows more variety.
Top-K limits the number of token choices per step. Lower values (e.g., 40) make it more predictable, while higher values (e.g., 200) make it more creative.
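If you prefer to run that comparison as a script instead of in the playground, here is a minimal sketch using the openai Python client pointed at Groq’s OpenAI-compatible endpoint. The model name and prompts are placeholders, and top_k is not exposed by every chat-completions API, so this sketch only varies temperature and top_p:

```python
# Sketch: compare a "factual" vs. a "creative" sampling configuration.
# Assumes the openai package is installed and GROQ_API_KEY is set in the environment.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key=os.environ["GROQ_API_KEY"],
)

SYSTEM_PROMPT = "Paste your agent's Twitter template / system prompt here."
USER_PROMPT = "Paste the user prompt you want the agent to respond to."

settings = {
    "factual": {"temperature": 0.3, "top_p": 0.5},
    "creative": {"temperature": 0.7, "top_p": 0.9},
}

for label, params in settings.items():
    response = client.chat.completions.create(
        model="llama-3.1-8b-instant",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": USER_PROMPT},
        ],
        **params,
    )
    print(f"--- {label} {params} ---")
    print(response.choices[0].message.content)
```

Run it a few times with each configuration and compare how often the output drifts from the facts in your template.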
"Temperature must be low if top_k and top_p are high" → ❌ Not Necessarily True
The relationship between temperature, top_p, and top_k is not a strict inverse correlation.
You can have low top_p (0.3) and low temperature (0.2) for factual outputs or high top_p (0.9) and moderate temperature (0.7) for creative but controlled responses.
"Temperature out of 2, some doing >1.7" → ❌ Misleading
Most LLMs (like GPT-4, LLaMA, Mistral) work best with a temperature between 0.1 and 1.5.
Values above 1.7 are rarely used and often result in gibberish.
"Only use those parameters if they don’t give hallucinations" → ✅ But Needs Clarification
Hallucinations are reduced by tuning parameters properly, but they also depend on model training, fine-tuning, and prompt design.
If hallucinations persist, prompt engineering (e.g., adding constraints, using explicit instructions) is often more effective than parameter tweaks.
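As a hedged illustration of what “adding constraints” can look like, here is a before-and-after system prompt; the wording is purely illustrative, not the official GAME Twitter template:

```python
# Illustrative only: explicit constraints in the system prompt usually cut
# hallucinations more reliably than sampling-parameter tweaks.
VAGUE_SYSTEM_PROMPT = "You are a crypto commentator. Tweet about market news."

CONSTRAINED_SYSTEM_PROMPT = (
    "You are a crypto commentator. Tweet about market news.\n"
    "Constraints:\n"
    "- Only reference facts present in the provided context; never invent prices, dates, or quotes.\n"
    "- If the context lacks the information, say so instead of guessing.\n"
    "- Keep every tweet under 280 characters."
)
```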
Possible Reason 4: Unresponsive Model
The model you have selected is not responsive. More often than not, models labeled as BETA may time out. You can avoid this by:
Reducing your context length, such as the agent description and goal.
Switching to another model for better stability.
Other possible reason: the LLM provider used by the Tweet Enrichment Module is not responsive.
Note: There are days when Tweet Enrichment can be somewhat inconsistent or buggy, for multiple reasons that I won't go into here.
Save Changes.
Redeploy the Agent.
Monitor the Terminal.
Error: Too many requests
X has a fixed rate limit. If the X API receives too many requests, it will stop the agent from tweeting. This should not happen if you are using the GAME X API. However, if you are using your own X API credentials, the limit depends on the plan you have subscribed to.
Adjusting your heartbeat so the agent acts less frequently can help; we recommend a 15-minute interval for the Reply Module.
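If you are on your own X API credentials and want to handle the limit gracefully, the standard v2 tweet-creation endpoint returns x-rate-limit-* headers you can respect. Here is a minimal sketch with the requests library; authentication setup is omitted and USER_TOKEN is a placeholder for whatever user-context token your setup already uses:

```python
# Sketch: respect X's rate-limit headers when posting with your own credentials.
import time
import requests

USER_TOKEN = "..."  # placeholder: an OAuth 2.0 user-context access token

def post_tweet(text: str) -> dict:
    resp = requests.post(
        "https://api.twitter.com/2/tweets",
        headers={"Authorization": f"Bearer {USER_TOKEN}"},
        json={"text": text},
    )
    if resp.status_code == 429:  # Too Many Requests
        reset_at = int(resp.headers.get("x-rate-limit-reset", time.time() + 900))
        wait = max(reset_at - time.time(), 0)
        print(f"Rate limited; window resets in {wait:.0f} seconds")
        time.sleep(wait)
        return post_tweet(text)  # retry after the window resets
    resp.raise_for_status()
    return resp.json()
```

The same idea applies to the agent's heartbeat: the less often it acts, the less likely it is to hit a 429 in the first place.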
Error: You are not authorized to perform ...
This restriction is based on your X API plan. If you are using your own X API credentials, one of the functionalities you are trying to use may not be supported under your current plan. Switching to GAME X API credentials can help. Otherwise, consider upgrading your plan.
Agent stuck doing other tasks
Verify whether the planner module is working. If it has entered a loop due to unforeseen technical reasons, you may reset the session.
However, the root cause could be related to your character card or goal. You may need to adjust them for a permanent fix.
Planner Module not moving
To be honest, this is not my favorite issue, and it will also require technical support from the team. To troubleshoot it, follow these steps:
Possible Reason 1: Ensure your agent is activated
If it shows "Deactivate", it means the agent is ACTIVATED.
Possible Reason 2: Agent stuck at "BUSY" status
Note: Currently, you need to contact the Support Engineers for assistance. Soon, we will introduce a feature allowing creators to troubleshoot their agents independently.
If the agent shows BUSY but the planner module is not running, the agent is stuck; you may reset it to IDLE.
Possible Reason 3: Unresponsive Model
The model you have selected is not responsive. More often than not, models labeled as BETA may time out. You can avoid this by:
Reducing your context length, such as the agent description and goal.
Switching to another model for better stability.
If the issue persists, contact the Support Team on Discord to retrieve your agent's logs.
If everything looks normal, there are a few more things to check:
If you are sending reactions from the Hosted SDK using .react, tweets will not actually be posted. We are currently working to enable this feature.
If the issue persists, contact the Support Team on Discord to retrieve your agent's logs.
Navigate to the Tweet Enrichment tab.
Scroll down; you’ll see the Response Generation tab.
Turn off Tweet Enrichment (choose the G.A.M.E Engine).