Just a stranger trying things.

  • 2 Posts
  • 116 Comments
Joined 1 year ago
Cake day: July 16th, 2023

  • I didn’t say it can’t. But I’m not sure how well it is optimized for it. From my initial testing, it queues queries and submits them to the model one after another; I have not seen it batch-compute the queries, but maybe it’s a setup issue on my side. vLLM, on the other hand, is designed specifically for the concurrent multi-user use case and has multiple optimizations for it (see the sketch below).
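
    A minimal sketch of the batched path I mean, using vLLM’s offline Python API (the model name is just an example, use whatever you actually run):

        # vLLM schedules all prompts together (continuous batching) instead of
        # answering them strictly one after another.
        from vllm import LLM, SamplingParams

        prompts = [
            "Explain GPU memory bandwidth in one sentence.",
            "What is speculative decoding?",
            "Summarize the benefits of quantization.",
        ]
        sampling = SamplingParams(temperature=0.7, max_tokens=128)

        llm = LLM(model="mistralai/Mistral-Nemo-Instruct-2407")
        outputs = llm.generate(prompts, sampling)

        for out in outputs:
            print(out.prompt, "->", out.outputs[0].text)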


  • The Hobbyist@lemmy.zip to Selfhosted@lemmy.world · Self-hosting LLMs

    I run Mistral-Nemo (12B) and Mistral-Small (22B) on my GPU and they are pretty good. As others have said, GPU memory is one of the most limiting factors. 8B models are decent, 15-25B models are good and 70B+ models are excellent (based solely on my own experience). Go for q4_K models, as they will run many times faster than higher-bit quantizations with little quality degradation. They typically come in S (Small), M (Medium) and L (Large) variants; pick the largest that fits in your GPU memory (a rough way to estimate the fit is sketched below). If you go below q4, you may see more severe and noticeable quality degradation.
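
    As a rough back-of-envelope check (assuming ~4.5 bits per weight for q4_K_M and ~20% overhead for the KV cache and runtime buffers; both are ballpark figures, not exact numbers):

        # Ballpark only: real usage also depends on context length and the runtime.
        def fits_in_vram(params_billion, vram_gb, bits_per_weight=4.5, overhead=1.2):
            weight_gb = params_billion * 1e9 * bits_per_weight / 8 / 1e9
            return weight_gb * overhead <= vram_gb

        for size in (8, 12, 22, 70):
            print(f"{size}B at ~q4 on a 12 GB card:", fits_in_vram(size, 12))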

    If you need to serve only one user at a time, ollama + Open WebUI works great (a minimal example follows). If you need multiple users at the same time, check out vLLM.
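
    For the single-user case, the whole loop can be as small as one call to ollama’s local REST API (default port 11434; the model tag below is just whichever model you pulled):

        import requests

        # Ask the locally running ollama server for a single completion.
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": "mistral-nemo", "prompt": "Write a haiku about VRAM.", "stream": False},
            timeout=300,
        )
        print(resp.json()["response"])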

    Edit: I’m simplifying it very much, but hopefully it is simple and actionable as a starting point. I’ve also seen great stuff from Gemma2-27B.

    Edit2: added links

    Edit3: a decent bang-for-buck GPU IMO is the RTX 3060 with 12GB. It may be available on the used market for a decent price and offers a good amount of VRAM and GPU performance for the cost. I would like to recommend AMD GPUs, as they offer much more GPU memory for their price, but they are not all as well supported by ROCm and I’m not sure about the compatibility with these tools, so perhaps others can chime in.

    Edit4: you can also use Open WebUI with VS Code via the continue.dev extension, so that you have a Copilot-style LLM in your editor.









  • From what I understand, bsky’s architecture seems to allow federation at multiple levels. On one side, the individual profiles are actually websites, and the app aggregates the content almost like an RSS reader. I do see some profiles which are hosted independently, like Jeff Geerling’s, so yes, federation at the profile level seems to work.

    And this is really important, because it is one way to prevent your data from being held hostage by the service. Then there is another level of federation. I’m not entirely sure of the terminology here, but there is an aggregator component, which is quite compute-intensive, and I don’t know whether anyone runs a second instance of it. But functionally speaking, I’m quite impressed by the technical side of bsky. A lot of thought has been put into it.
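
    To illustrate the profile-level part, here is a small sketch (as I understand the public AT Protocol endpoints; the handle is just an example) that resolves a handle to its DID and then looks up which personal data server actually hosts that account:

        import requests

        handle = "bsky.app"  # any handle works here

        # Resolve the handle to its DID via the public AppView.
        did = requests.get(
            "https://public.api.bsky.app/xrpc/com.atproto.identity.resolveHandle",
            params={"handle": handle},
            timeout=30,
        ).json()["did"]

        # For did:plc accounts, the DID document from the PLC directory lists the
        # personal data server (PDS) that actually stores the account's data.
        doc = requests.get(f"https://plc.directory/{did}", timeout=30).json()
        print(did)
        print([s.get("serviceEndpoint") for s in doc.get("service", [])])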

    And monetizing it is not the issue in itself; the question is mostly how. That they have some paid features is fine; it’s even important that there are ways to monetize it without milking their users for their privacy.

    Let’s hope this works out and becomes sustainable while respecting the users!


  • I’m following bsky’s progress. They have shared things which, from a technical standpoint and from a social network empowerment perspective, are very interesting. The portability of profiles and the fully custom moderation layers are particularly noteworthy and seem to go far beyond what I’ve seen in other social networks. Even in Mastodon it is apparently not possible to port a profile from one instance to another without losing all your post history (ente.io tried this recently and got caught out by this). And for moderation, you have to rely on your instance’s moderation rather than a personalized one. The annotation part of bsky is also interesting to me. (A rough sketch of what full account export looks like is below.)
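
    On the portability point, my understanding is that an account’s entire repository (post history included) can be exported from its PDS as a single CAR file; a rough sketch, with the host and DID as placeholders:

        import requests

        pds = "https://example-pds.host"   # placeholder PDS host
        did = "did:plc:example"            # placeholder account DID

        # com.atproto.sync.getRepo returns the full signed repo as a CAR file.
        car = requests.get(
            f"{pds}/xrpc/com.atproto.sync.getRepo",
            params={"did": did},
            timeout=120,
        )
        with open("repo-backup.car", "wb") as f:
            f.write(car.content)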