Troubleshooting

Common issues and solutions to help you resolve Reqase Lite problems quickly.

Quick Troubleshooting Tips

  • Try refreshing the page after making configuration changes
  • Check that you have the necessary permissions in Jira
  • Verify all API keys and credentials are entered correctly
  • Review the error message carefully for specific details

Installation & Setup Issues

I don't see the Reqase Lite panel on issues

Possible Causes:

  • Plugin not enabled in Project Settings
  • Insufficient permissions to view issue panels
  • Reqase Lite not enabled on Issue View

Solutions:

1. Ensure the plugin is enabled in Project Settings
2. Check that you have permission to view issue panels in Jira
3. Refresh the issue page after enabling the plugin
4. Enable Reqase Lite on Issue View (can be done per ticket or set as default for all issues)

Zephyr Integration Issues

"Connection failed" error

Possible Causes:

  • Incorrect Client ID or Client Secret
  • API access not enabled in Zephyr
  • Network connectivity issues
  • Expired credentials

Solutions:

1. Double-check your Client ID and Client Secret (copy-paste to avoid typos)
2. Verify API access is enabled in your Zephyr account settings
3. Try regenerating your API credentials in Zephyr
4. Check your internet connection
5. Disable VPN if you're using one
6. Try a different browser
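To separate a network problem from a credential problem, you can first check whether the Zephyr API host is reachable from your machine at all. This is a quick sketch, not a login test: the host shown is the Zephyr Scale Cloud base URL, so substitute your own if you use a different Zephyr product or region. Any HTTP status in the output (even 401 or 404) means the network path works; a timeout points to connectivity or VPN issues.

```shell
# Print only the HTTP status code of the response; replace the host
# with your own Zephyr API base URL if it differs.
curl -sS -o /dev/null -w "HTTP status: %{http_code}\n" \
  https://api.zephyrscale.smartbear.com
```

If this times out, fix connectivity (step 4 or 5 above) before touching your credentials.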

"Unauthorized" error after saving

Possible Causes:

  • Credentials changed in Zephyr
  • API key revoked
  • Insufficient permissions

Solutions:

1. Regenerate API credentials in Zephyr
2. Update credentials in Reqase Lite
3. Verify your Zephyr account has permission to create test cases
4. Check if your Zephyr subscription is active

AI Generation Issues

Generation fails immediately

Possible Causes:

  • Custom AI provider misconfigured
  • API key invalid or expired
  • Insufficient API credits
  • Network issues

Solutions:

1. Go to Settings → Custom AI Integration
2. Disable all custom providers to use the built-in AI
3. If using a custom provider:
   • Verify the API key is correct
   • Check API credits/quota
   • Test the connection again
4. Try generating fewer test cases (5 instead of 20)

Generation takes too long or hangs

Possible Causes:

  • Generating too many test cases
  • Complex requirements
  • AI provider rate limits
  • Network latency

Solutions:

1. Click the Stop Generating button
2. Reduce the test count (try 5-10 instead of Unlimited)
3. Simplify custom instructions
4. Wait a few minutes and try again (the rate limit may reset)
5. Check your internet connection speed

Generated test cases are low quality

Possible Causes:

  • Insufficient context provided
  • Vague custom instructions
  • Wrong test type selected
  • AI model limitations

Solutions:

1. Add a detailed Project Description in Advanced Settings
2. Select relevant Requirement Custom Fields
3. Provide specific Custom Instructions
4. Try a different AI provider or model
5. Use Regenerate with AI to get new results
6. Review and edit test cases before approving

Test cases are in wrong language

Possible Causes:

  • Default language not set
  • Custom instructions in a different language

Solutions:

1. Go to Settings → Custom AI Integration → Advanced Settings
2. Select your preferred language from the Default Language dropdown
3. Ensure custom instructions are in the same language
4. Regenerate test cases

Custom AI Provider Issues

OpenAI connection fails

Solutions:

1. Verify the API key from the OpenAI Platform
2. Check if you have available credits
3. Ensure the API key has the correct permissions
4. Try a different model (GPT-3.5 Turbo instead of GPT-4)
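You can check that your key works independently of Reqase Lite by calling OpenAI's model-listing endpoint directly. A quick sketch, assuming your key is exported in the `OPENAI_API_KEY` environment variable:

```shell
# List the models your key can access; a 401 response means
# the key is invalid or has been revoked.
curl -sS https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```

If this returns a JSON list of models, the key itself is fine and the problem lies in the Reqase Lite configuration or your credit balance.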

Google Gemini connection fails

Solutions:

1. Verify the API key from Google AI Studio
2. Check if the Gemini API is enabled for your project
3. Verify you're within rate limits (60 requests/minute)
4. Try Gemini 1.5 Flash instead of Pro
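As with OpenAI, you can test a Gemini key outside Reqase Lite by listing the available models. A quick sketch, assuming your key is exported in a `GEMINI_API_KEY` environment variable:

```shell
# List available Gemini models; an error response usually names the
# cause (invalid key, API not enabled for the project, or quota).
curl -sS "https://generativelanguage.googleapis.com/v1beta/models?key=$GEMINI_API_KEY"
```

A successful JSON response confirms the key and project setup; an error body here tells you which of the causes above to fix first.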

Custom LLM connection fails

Solutions:

1. Verify the API URL is correct and accessible
2. Test the endpoint with curl or Postman (add your authentication header if your endpoint requires one):

   curl -X POST https://your-api-url/v1/chat/completions \
     -H "Content-Type: application/json" \
     -H "Authorization: Bearer YOUR_API_KEY" \
     -d '{"model":"your-model","messages":[{"role":"user","content":"test"}]}'

3. Ensure the endpoint supports the OpenAI API format
4. Check that the authentication headers are correct
5. Review your custom LLM's documentation for compatibility

Still Need Help?

If you're still experiencing issues after trying these solutions, our support team is ready to assist you.

When contacting support, please include:

  • The exact error message you're seeing
  • Steps to reproduce the issue
  • Your Jira Cloud instance URL
  • Which test management system you're using