maroonblazer 10 days ago

Just tried this with an interpersonal situation I'm going through. The default seems to be Claude 3.5 Sonnet and ChatGPT-4o. I got the results I've come to expect from those two, with the latter better at non-programming kinds of prompts.

The app presented the option of prompting additional models, including Gemini Flash 2.0, one I'd never used before. It gave the best response and was surprisingly good.

Curious to know how Chorus is paying for the compute, as I was expecting to have to use my own API keys.

  • benatkin 9 days ago

    Some throttling, plus a naturally limited number of users since it's a desktop app, perhaps.

    I just checked to see if it was signed, without running it. It is. I don't care to take the risk of running it even if it's signed. If it were a web app I'd check it out.

    I don't know if there's any sort of login. With a login, they could throttle based on that. Without a login, it looks like they could use this to check if it's being used by an Apple computer. https://developer.apple.com/documentation/devicecheck/valida...

    • sunnybeetroot 9 days ago

      DeviceCheck is not available for macOS apps, see the following documentation: https://developer.apple.com/documentation/devicecheck/dcappa...

      • benatkin 9 days ago

        I see. I had checked attestKey and it says "Mac Catalyst 14.0+ | macOS 11.0+" among others, but that just means the API is present. developer.apple.com/documentation/devicecheck/dcappattestservice/attestkey(_:clientdatahash:completionhandler:)

  • owenpalmer 10 days ago

    Do they have really strict rate limits? How much did you use it?

Charlieholtz 9 days ago

Hi! One of the creators of Chorus here. Really cool to hear how everyone is using it. We made this as an experiment because it felt silly to constantly be switching between the ChatGPT, Claude, and LM Studio desktop apps. It's also nice to be able to run models with custom system prompts in one place (I have a Claude with a summary of how CBT works that I find pretty helpful).

It's a Tauri 2.0 desktop app (not Electron!), so it uses the Mac's native browser view and a Rust backend. That also keeps the DMG relatively small (~25 MB, and we can get it much smaller once we get rid of some bloat).

Right now Chorus is proxying API calls through our server, so it's free to use. We didn't add bring-your-own-API-key to this version because it was quicker to ship without it. This was kind of an experimental winter break project, so we didn't think too hard about it. We'll likely have to fix that (add bring-your-own-key? a paid version?) as more of you use it :)

Definitely planning on adding support for local models too. Happy to answer any other questions, and any feedback is super helpful (and motivating!) for us.

UPDATE: Just added the option to bring your own API keys! It should be rolling out over the next hour or so.
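A free proxy like the one described above usually needs per-client throttling to keep the bill bounded. Here is a minimal, illustrative token-bucket sketch in Python; this is not Chorus's actual implementation, and `check_request`/`client_id` are hypothetical names:

```python
import time

class TokenBucket:
    """Minimal per-client rate limiter a proxy might apply (illustrative only)."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec        # tokens refilled per second
        self.capacity = burst           # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets = {}

def check_request(client_id: str) -> bool:
    """Allow ~1 request per 2 seconds per client, with a burst of 5."""
    bucket = buckets.setdefault(client_id, TokenBucket(rate_per_sec=0.5, burst=5))
    return bucket.allow()
```

A real deployment would key the bucket on a login or device identifier rather than a raw client string.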

  • d4rkp4ttern 9 days ago

    Curious to check it out but a quick question — does it have autocomplete (GitHub Copilot-style) in the chat window? IMO one of the biggest missing features in most chat apps is autocomplete. Typing messages in these apps quickly becomes tedious, and autocompletions help a lot with this. I’m regularly shocked that it’s almost year 3 of LLMs (depending on how you count) and none of the big vendors have thought of adding this feature.

    Another mind-numbingly obvious feature: hitting Enter should just create a newline, and Cmd-Enter should submit. Or at least make this configurable.

    (EDITED for clarity)
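Copilot-style chat autocomplete would normally call a small, fast model with the conversation as context. As a toy stand-in, here is a sketch that completes the user's typing from phrases already present earlier in the conversation; `suggest`, `history`, and the minimum-keystroke threshold are all illustrative:

```python
def suggest(conversation, typed, min_chars=3):
    """Toy context-aware completion: propose the continuation of a phrase that
    already appeared earlier in the conversation. A real implementation would
    query a small LLM; this only shows the plumbing and the keystroke gate."""
    if len(typed) < min_chars:          # avoid noisy suggestions on 1-2 keystrokes
        return None
    idx = conversation.lower().rfind(typed.lower())
    if idx == -1:
        return None
    rest = conversation[idx + len(typed):]
    # Complete only up to the end of the current sentence fragment.
    end = min((rest.find(c) for c in ".\n" if c in rest), default=len(rest))
    return rest[:end] or None

history = "Please refactor MyCustomClass to use dependency injection."
# suggest(history, "MyCustom") -> "Class to use dependency injection"
```

The `min_chars` gate reflects the point raised downthread: predictions from the first keystroke are rarely useful, so a real UI would wait for some typed prefix before suggesting.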

    • akshayKMR 5 days ago

      I don't think this would be good UX, except maybe once you've already typed ~20 characters or so. If the model were that good at predicting from the first keystroke, the previous response would already have contained the information you're asking for. It could also work for short commands like "expand" or "make it concise", but I can see incorrect predictions being distracting.

      > Typing messages in these chat apps quickly becomes tedious and autocompletions help a lot with this.

      If you're on a Mac, you can use dictation: focus the text input, double-tap the Control key, and just speak.

      • d4rkp4ttern 3 days ago

        In the Zed editor there’s GitHub Copilot autocomplete enabled in the chat assistant, and it’s incredibly useful when I’m iterating on code generations.

        The autocomplete is so good that even for non-coding interactions I tend to just use the Zed chat assistant panel (which can be configured to use different LLMs via a dropdown).

        More generally in multi-turn conversations with an LLM you’re often refining things that were said before, and a context-aware autocomplete is very useful. It should at least be configurable.

        The Mac's default Dictation is OK for non-technical things, but for anything code-related it would suck, e.g. if I’m referring to things like MyCustomClass.

    • Charlieholtz 9 days ago

      Enter does continue the chat! And shift-enter for new line.

      My Mac now has built-in Copilot-style completions (maybe only since upgrading to Sequoia?). They're not amazing, but they're decent.

      https://support.apple.com/guide/mac-help/typing-suggestions-...

      • d4rkp4ttern 9 days ago

        Sorry I meant hitting enter should NOT submit the chat. It should continue taking my input. And when I’m ready to submit I’d like to hit cmd-enter
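The request above (Enter inserts a newline, Cmd-Enter submits, or at least a preference for it) boils down to a tiny dispatch rule. A hedged sketch, with `on_enter`, `Action`, and the modifier names all hypothetical:

```python
from enum import Enum

class Action(Enum):
    SUBMIT = "submit"
    NEWLINE = "newline"

def on_enter(mods: set, enter_submits: bool = True) -> Action:
    """Decide what the Enter key does, per user preference.
    enter_submits=True:  Enter sends, Shift+Enter inserts a newline.
    enter_submits=False: Enter inserts a newline, Cmd+Enter sends."""
    if enter_submits:
        return Action.NEWLINE if "Shift" in mods else Action.SUBMIT
    return Action.SUBMIT if "Cmd" in mods else Action.NEWLINE
```

Making `enter_submits` a stored setting would satisfy both camps in this subthread.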

        • gazook89 9 days ago

          I agree, but only personally. I would assume most people are on the “Enter to submit” train nowadays.

          Most of my messaging happens on Discord or Element/matrix, and sometimes slack, where this is the norm. I don’t even think about Shift+Enter nowadays to do a carriage return.

    • hombre_fatal 9 days ago

      There are a lot of basic features missing from the flagship llm services/apps.

      Two or so years ago I built a localhost web app that lets me trivially fork convos, edit upstream messages (even bot messages), and generate an audio companion for each bot message so I can listen to it while on the move.

      I figured these features would quickly appear in ChatGPT’s interface but nope. Why can’t you fork or star/pin convos?

    • d4rkp4ttern 9 days ago

      The only editor I’ve seen that has both these features is Zed.

  • LorenDB 9 days ago

    If it's using Tauri, why is it Mac only?

    • Charlieholtz 9 days ago

      Only because I haven't tested it on Windows/Linux yet (started working on this last week!). But it should theoretically be easy to package for other OSes.

dcreater 9 days ago

Airtrain.ai and msty.app have had this for a while.

What isn't there, and would be useful, is to have the responses swipeable rather than side by side. When you're using it for code comparisons, even two gets cramped.

  • kmlx 9 days ago

    imo even more useful would be to have a single answer that represents a mix of all the other answers (with an option to see each individual answer etc)

    • sdesol 9 days ago

      I have that in the chat app that I am working on.

      https://beta.gitsense.com/?chat=51219672-9a37-442d-80a3-14d8...

      It provides a summary of all the responses, and if you click on "Conversation" in the user message bubble, you can view every LLM's response to the question "How many r's in strawberry".

      You can also fork the message and, say, create a single response based on all the responses.

      Edit: The chatting capability has been disabled as I don't want to incur an unwanted bill.
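The merged-answer idea from this subthread is usually implemented by feeding every model's response back into one model and asking it to reconcile them. A hedged sketch of that prompt-building step; `build_synthesis_prompt` and the model names are illustrative, and the final `ask` call stands in for a real provider API:

```python
def build_synthesis_prompt(question: str, answers: dict) -> str:
    """Assemble a prompt that asks one model to merge several models' answers,
    keeping the option to inspect each candidate individually."""
    parts = [f"Question: {question}", "", "Candidate answers:"]
    for model, answer in answers.items():
        parts.append(f"- {model}: {answer}")
    parts.append("")
    parts.append("Write one answer that reconciles the candidates and notes disagreements.")
    return "\n".join(parts)

answers = {
    "model-a": "There are 2 r's in strawberry.",
    "model-b": "There are 3 r's in strawberry.",
}
prompt = build_synthesis_prompt("How many r's in strawberry?", answers)
# synthesized = ask("some-model", prompt)  # hypothetical provider call
```

Keeping the candidates labeled in the prompt is what lets the synthesized answer point back at individual responses, as the GitSense demo does.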

solomatov 9 days ago

I would be much more likely to install this if it was published in the app store.

  • desireco42 9 days ago

    There are good reasons not to publish on the App Store, e.g. if you want to actually make any money from the app.

    • solomatov 9 days ago

      My main concern is security and privacy: App Store apps are sandboxed, but manually installed apps usually are not.

    • solomatov 9 days ago

      If you are small, the app store looks to me as the easiest solution for selling apps.

    • swyx 9 days ago

      also if u have gone thru the hell that is publishing and signing mac apps

  • ripped_britches 9 days ago

    Most popular Mac apps, like Spotify and VS Code, are not.

    • n2d4 9 days ago

      Because they're big enough that they can afford not to, and they want to do things that the sandbox/review process/monetisation rules wouldn't allow. I assume the sandbox is exactly why the parent wants the app to be there.

      • yuppiepuppie 9 days ago

        I would have thought the exact opposite of your statement: they are big enough that they could afford it. It seems like the ability to forgo the App Store on Mac lets Apple get away with things like a high-friction review process and restrictive monetization rules. Without the big players pushing back, why would they change?

        • KetoManx64 9 days ago

          Doesn't Apple charge App Store apps 30% of all their transactions/subscriptions? What company in their right mind would opt into that if they don't have to?

          • solomatov 9 days ago

            A small to medium-sized company, for several reasons:

            - Setting up payments with a third-party provider isn't that simple, and their fees are far from zero.

            - Getting users. Popular queries on Google are full of existing results, and ranking there isn't easy or cheap. Also, search engines aren't the most popular way to get apps onto your devices; people usually search directly in app stores. Apple takes care of this, i.e. I'd guess that popular apps with good ratings rank higher in search results.

            - Trust. I install apps on my computer from outside the App Store only if I trust the supplier of the software (or have no choice). Apple solves this with sandboxing.

            Yep, 30% is a lot, but for these kinds of businesses it might be well worth it (especially with the reduced 15% commission for smaller revenues).

sharonbiren 2 days ago

Is it supposed to support Intel-based Macs in the future? It cannot run on my Mac.

mikae1 9 days ago

Was hoping this would be a LM Studio alternative (for local LLMs) with a friendlier UI. I think there's a genuine need for that.

It could make available only the LLMs that your Mac is able to run.

Many Apple Silicon owners are sitting on very capable hardware without even knowing it.

  • wkat4242 9 days ago

    I don't know LM Studio but I really like OpenWebUI. Maybe worth a try.

    I use it mainly because my LLM runs on a server, not my usual desktop.

    • lgas 9 days ago

      On that note, I recently learned from Simon Willison's blog that if you have uv installed, you can try OpenWebUI via:

          uvx --python 3.11 open-webui serve
  • rubymamis 9 days ago

    This is exactly what I’m building at https://www.get-vox.com - it automatically detects your local models installed via Ollama.

    It is fast, native and cross-platform (built with Qt using C++ and QML).

nomilk 10 days ago

Love the idea. I frequently use ChatGPT (out of habit) and, while it's generating, copy/paste the same prompt into Claude and Grok. This seems like a good way to save time.
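Replacing that copy/paste loop is essentially a concurrent fan-out: send one prompt to every provider at once and collect the replies. A minimal sketch; the stub lambdas below stand in for real OpenAI/Anthropic/xAI SDK calls:

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out(prompt: str, providers: dict) -> dict:
    """Send one prompt to every provider concurrently and collect the replies.
    Each value in `providers` is a callable taking the prompt; in a real app
    these would wrap the vendors' SDK calls."""
    with ThreadPoolExecutor(max_workers=len(providers)) as pool:
        futures = {name: pool.submit(fn, prompt) for name, fn in providers.items()}
        return {name: fut.result() for name, fut in futures.items()}

# Stub providers for illustration only.
replies = fan_out("Hello", {
    "model-a": lambda p: p.upper(),
    "model-b": lambda p: p[::-1],
})
# replies == {"model-a": "HELLO", "model-b": "olleH"}
```

Threads are enough here because the work is network-bound; an async variant with one task per provider would behave the same way.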

sleno 10 days ago

very well designed! how does this work? in the sense that i didn't have to copy/paste any keys and yet this is offering paid models for free.

  • Charlieholtz 9 days ago

    Thanks! Right now Chorus is proxying API calls to our server so it's free. This was kind of an experimental winter break project that we were using internally, and it was quicker to ship this way.

    Likely going to add bring your own API keys (or a paid version) soon.

    Update: just added option to bring your own keys! Should be available within an hour.

  • swyx 9 days ago

    if you are not paying... you are the product

detente18 4 days ago

Your changelog is neat - is this custom built or via some embeddable tool?

kanodiaashu 9 days ago

This reminds me of the search engine aggregators in the old days that used to somehow install themselves in Internet Explorer and then collected search results from multiple providers and sometimes compared them. I wonder if this time these tools will persist.

wonderfuly 6 days ago

ChatHub is the first service to do this, and it's been around for almost two years, even before the release of the GPT-3.5 API.

rubymamis 9 days ago

If you're looking for a fast, native alternative for Windows, Linux (and macOS), you can join my new app waitlist: https://www.get-vox.com

  • KetoManx64 9 days ago

    Is it going to be open source?

    • rubymamis 9 days ago

      I'm not sure. I thought about setting up a funding goal, after which I'll open source it.

prmoustache 9 days ago

Or you can do that in the tmux terminal multiplexer using the synchronize-panes option.

A number of terminals can also do that natively (kitty comes to mind).
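The tmux approach amounts to a handful of commands: open one pane per CLI tool, tile them, and mirror keystrokes to every pane. A sketch that builds those commands (the tool names are placeholders; each command list would be run in order, e.g. with `subprocess.run`):

```python
def tmux_fanout(session: str, tools: list) -> list:
    """Build the tmux commands for the synchronize-panes approach: one pane
    per CLI tool, tiled, with typed input mirrored to every pane."""
    cmds = [["tmux", "new-session", "-d", "-s", session, tools[0]]]
    for tool in tools[1:]:
        cmds.append(["tmux", "split-window", "-t", session, tool])
    cmds.append(["tmux", "select-layout", "-t", session, "tiled"])
    # synchronize-panes is a window option: keystrokes go to all panes at once.
    cmds.append(["tmux", "set-option", "-w", "-t", session, "synchronize-panes", "on"])
    return cmds

cmds = tmux_fanout("llms", ["llm-cli-a", "llm-cli-b", "llm-cli-c"])
```

Because tmux handles the layout and input mirroring, each pane can run anything, including a tool on a remote machine over SSH.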

  • Lionga 9 days ago

    Dropbox is just curlftpfs with SVN, in other words useless.

    • prmoustache 9 days ago

      I see what you did there.

      But the actual amount of effort to get to Dropbox's level in a multi-device context is orders of magnitude higher than the triviality of autoloading a handful of CLI tools in different panes and synchronizing them in tmux.

      • prmoustache 9 days ago

        example here: https://forge.chapril.org/prmoustache/examples/src/branch/ma...

        Only 35 lines of code including empty lines and comments.

        That approach is also dead simple to maintain, multiplatform and more flexible:

        - Separation of form and function: tmux handles the layout and sync; the individual tools handle the AI models.

        - I can use remote machines over SSH.

paul7986 10 days ago

Cool and GPT/Claude think there are only 2 "r"s in strawberry?

Wow, that's a bit scary (I use GPT a lot). How bad a fail that is!

  • joshstrange 10 days ago

    I maintain that “2 ‘r’s” is a semi-valid answer. If a human who is writing pauses and looks up to ask that question, they almost certainly want to hear “2”.

    • furyofantares 9 days ago

      A few days ago I was playing a trivia-ish game in which I was asked to spell "unlabeled", which I did. The questioner said I was wrong, that it "has two l's" (the U.K. spelling being "unlabelled"). I jokingly countered that I had spelled it with two l's, which she took to mean that I was claiming to have spelled it "unlabelled".

  • sdesol 9 days ago

    Here's more LLMs

    https://beta.gitsense.com/?chats=ba5f73ac-ad76-45c0-8237-57a...

    The left window contains all the models that were asked, and the right window contains a summary of the LLM responses. GPT-4o mini got it right, but the vast majority got it wrong, which is scary.

    It wasn't until the LLM was asked to count out the R's that it acknowledged GPT-4o mini was the only one that got it right.

    Edit: I've disabled chatting in the app, since I don't want to rack up a bill. Should have mentioned that.

  • lxgr 10 days ago

    Gell-Mann amnesia is powerful. Hope you extrapolate from that experience!

    At a technical level, they don't know because LLMs "think" (I'd really call it something more like "quickly associate" for any pre-o1 model and maybe beyond) in tokens, not letters, so unless their training data contains a representation of each token split into its constituent letters, they are literally incapable of "looking at a word". (I wouldn't be surprised if they'd fare better looking at a screenshot of the word!)
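The point above can be made concrete: counting letters is trivial at the character level, but the model never sees characters. The token split below is illustrative (roughly the kind of pieces a BPE tokenizer produces, not the output of any specific tokenizer):

```python
word = "strawberry"

# Character-level view: trivial to count.
r_count = word.count("r")          # 3

# Token-level view (illustrative BPE-style split):
tokens = ["str", "aw", "berry"]
# The model receives opaque token IDs for pieces like these, not letters,
# so "how many r's" cannot be answered by inspecting the input the way
# count() does above, unless the training data spells the tokens out.
assert "".join(tokens) == word
```

This is also why asking the model to spell the word letter by letter (as in the Claude transcript downthread) often fixes the answer: the spelled-out form puts the letters back into the token stream.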

  • robwwilliams 10 days ago

    Today: Claude

    Let me count carefully: s-t-[r]-a-w-b-e-[r]-[r]-y

    There are 3 Rs in "strawberry".

    • e1g 9 days ago

      This app uses Claude over the API, and that answers: "In the word "strawberry" there are 2 r's." Claude web chat is correct, though.

      • sdesol 9 days ago

        I would not be surprised if OpenAI, Anthropic, Meta, and others use the feedback system to drive corrections. Basically, if we use the API, we may never get the best answer, though it could also be true that all feedback gets applied to future models.

cryptozeus 9 days ago

Thanks for the simple landing page and an example so simple anyone can understand it.

ranguna 9 days ago

lmarena.ai is also pretty good. It's not Mac-exclusive, works from the browser, and has a bunch of different AIs to choose from. It doesn't keep history when you close the tab, though.

whatever1 9 days ago

Isn’t this cheating? What will the AI overlords think about this behavior once they take over?

sagarpatil 9 days ago

msty.app does this and much more. It’s open source too.