

Der8auer’s video is worth a watch; he got hold of one of the Redditors’ cards:
Yes, so R&D and finalizing the model weights is done on NVIDIA GPUs (I’d guess because you need an enormous amount of VRAM).
Inference is probably gonna be offloaded to consumers in the end, with the NPU on their device taking care of the inference cost (see Apple, Qualcomm, etc.)
I’m not the best with AI/LLM terminology, but I assume that training the models was done on Nvidia, while inference (actually running the model to get answers out of it) is done on Huawei chips.
To add: training the model is a huge one-time expense, while inference is a continuous one.
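To make the training/inference split concrete, here’s a minimal PyTorch sketch (the toy model and sizes are made up for illustration; real LLMs have billions of parameters):

```python
import torch
import torch.nn as nn

# Toy stand-in for an LLM; real models are vastly larger.
model = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 512))
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

# Training: forward + backward + optimizer update. Gradients and optimizer
# state inflate the memory footprint well beyond the weights themselves,
# which is why training happens on VRAM-heavy GPUs.
x, target = torch.randn(32, 512), torch.randn(32, 512)
loss = nn.functional.mse_loss(model(x), target)
loss.backward()
opt.step()

# Inference: forward pass only, no gradients kept. Much cheaper per call,
# but you pay it for every single query, forever.
with torch.no_grad():
    y = model(torch.randn(1, 512))
```

The gradients and optimizer state are the one-time cost you pay on big GPUs; the forward-only path is the recurring cost, and it’s lightweight enough to farm out to smaller chips or NPUs.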
Sounds like you might just be maxing out the capacity of the coax cable as well (depending on length/signal integrity). E.g. ITGoat (not sure how trustworthy that webpage is, just an example) lists 1 Gbps as the maximum for coax, and you’d typically expect less than that in practice, again depending on your situation (cable length, material, etc.)
What does the line into your wall look like? Depending on country/ISP/regulations they might sell you up to 1000 Mbps under the assumption that it’s a single line going to a single user; however, quite often that line is shared with potentially a lot of other customers.
Some countries let you buy packages where you get a standalone line going to your wall, though at an additional cost. (For a sense of scale: a 1 Gbps segment shared by 32 households works out to roughly 31 Mbps each if everyone is pulling at once.)
If I remember correctly, the timeout you’re hitting through yay is sudo’s own timestamp_timeout, which defaults to 15 minutes upstream (distros can override it); you should be able to increase it to something more reasonable.
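Raising it is a one-liner in a sudoers drop-in (a sketch; the file name is arbitrary, and you should always edit sudoers files through visudo so a syntax error can’t lock you out):

```
# /etc/sudoers.d/timeout  (edit with: visudo -f /etc/sudoers.d/timeout)
# Extend sudo's cached-credential lifetime from the default 15 minutes to 60.
Defaults timestamp_timeout=60
```

yay also has a --sudoloop flag that refreshes the sudo timestamp in the background during long builds, which avoids touching the system-wide default at all.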
Maybe the LLMs they prompted didn’t know about the built-in SSH support, hence still recommend PuTTY? 🤔
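For reference, Windows 10 (1809 and later) ships an OpenSSH client, so this works straight from PowerShell or cmd with no PuTTY involved (host and user below are placeholders):

```
# Connect, no PuTTY needed:
ssh user@example.com

# File transfer without PSCP:
scp .\notes.txt user@example.com:/home/user/
```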