Faerie
ChatGPT's little sister
What
- A chatbot with multiple chats, similar to ChatGPT
- Built on OpenAI APIs, which are way more reliable than ChatGPT itself
- Configurable - try different models/temperatures/prompts to find what works best for you
- Not free! Pay as you go, no subscription.
- Pay with Bitcoin Lightning
Why
- ChatGPT's delays/limits/outages make it increasingly unusable
- ChatGPT's algorithms and built-in prompts keep changing, and you can't adjust its high temperature setting, so it's hard to get consistent results
Advantages over ChatGPT
- Faster, more reliable, ~100% uptime, longer sessions, no aggressive rate limits
- Responses render all at once, without the slow letter-by-letter animation
- Option to choose specific models and config options
- Most recent chats go to the top of the list
- Your code blocks (in ``` backticks) are formatted nicely, the same as in responses
- NEW: Can answer questions about GitHub repos, just paste the full repo link in your message
- This will deduct an above-average number of tokens, perhaps a few thousand, since GPT reads your GitHub folder structure and any files needed to answer your question
- Right now this is only good at answering questions about one file at a time; it can't (yet) reason holistically across files
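To make the repo feature concrete, here is a hypothetical sketch of the first step it might take: fetching a repo's file tree through the public GitHub API so the paths (and any needed file contents) can be included in the model's prompt. The function names and the default branch are illustrative assumptions, not Faerie's actual code.

```python
import json
import urllib.request

def repo_tree_url(repo_link: str, branch: str = "main") -> str:
    """Build the GitHub API URL for a repo's full recursive file tree.

    repo_link is an ordinary link like https://github.com/owner/repo.
    """
    owner_repo = repo_link.rstrip("/").split("github.com/")[1]
    return f"https://api.github.com/repos/{owner_repo}/git/trees/{branch}?recursive=1"

def list_repo_paths(repo_link: str) -> list[str]:
    """Return every file path in the repo.

    Fetching the tree costs no OpenAI tokens by itself, but once the
    paths and file contents go into the prompt, they do - which is why
    repo questions deduct more tokens than a normal message.
    """
    with urllib.request.urlopen(repo_tree_url(repo_link)) as resp:
        tree = json.load(resp)
    return [item["path"] for item in tree["tree"] if item["type"] == "blob"]
```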
Limitations
- Not as good as ChatGPT at intuiting your intent
- Its conversational memory is currently very simple: just the last 10 messages
- Not optimized for mobile
- Conversations are saved to the database in plaintext. Don't share sensitive information.
Configuration Settings
Experiment with these settings to find what works best for you. Settings apply only to the next message and can be changed anytime.
- model
- text-davinci-003 - The smartest text model, also good at coding. Fast. Max 4000 tokens. Use this most of the time.
- code-davinci-002 - Maybe better at coding. Slow - sometimes takes 30-60 seconds to respond. Max 8000 tokens. OpenAI describes it as, "Most capable Codex model. Particularly good at translating natural language to code. In addition to completing code, also supports inserting completions within code."
- temperature
- "What sampling temperature to use. Higher values means the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer."
- maxTokens
- "The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096)."
- convoHistory
- Whether to include the last 10 messages of the conversation in the prompt. This is a simple way to make the bot remember what you've said. It's not very smart, but it's better than nothing.
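To show how these settings fit together, here is a minimal sketch of building a completion request; the parameter names map directly onto OpenAI's completion endpoint, but the function names and defaults below are illustrative assumptions, not Faerie's actual code.

```python
def build_prompt(message: str, history: list[str], convo_history: bool) -> str:
    """Prepend the last 10 conversation messages when convoHistory is on."""
    parts = history[-10:] if convo_history else []
    return "\n".join(parts + [message])

def build_request(message, history=(), model="text-davinci-003",
                  temperature=0.7, max_tokens=1000, convo_history=True):
    """Assemble one request from the settings described above."""
    return {
        "model": model,
        "prompt": build_prompt(message, list(history), convo_history),
        "temperature": temperature,  # 0 = well-defined answers, ~0.9 = creative
        "max_tokens": max_tokens,    # completion cap; prompt tokens + this
                                     # must fit the model's context length
    }
```

Disabling convoHistory here drops the prepended messages entirely, which is why it saves tokens.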
Tips
- You can submit with Cmd+Enter
- That's Mac - on Windows it might be Ctrl+Enter
- To save tokens, disable convoHistory when you don't need it
Support
- To report bugs or make feature requests, tweet or DM @FaerieAI