• 0 Posts
  • 33 Comments
Joined 1 year ago
Cake day: October 25th, 2023

  • First of all, yes, everyone using older technology “has bullets coming at them”, you clearly know as well as I do about wear and tear in electronic components so I don’t know why the tone of your reply implies that older hardware will run fine forever as long as “nothing is wrong”. It’s a balancing act; you can’t know if something’s gonna go wrong with your hardware until it does, but failure rates go up the older it gets, plain and simple.

    Secondly, yes, you completely misunderstood what I said. I upgraded before anything went wrong, for gaming and local AI primarily, and because I wanted to avoid the tariffs I knew were coming. I repurposed my old system in its entirety into a server, and not even a few months later it BSOD’d with the fatal hardware error. I know it can be a transient error, I said it’s generally not, because that’s my experience in the absence of overheating/overvolting. I had not overclocked at all, I don’t feel like risking it for a few extra percent performance when I’m running a system for long-term stability.

    Finally, I think you’ll find that I didn’t actually recommend an upgrade; I just parted out an upgrade kit at what most would consider a reasonable price these days, and I ALSO labeled my experience as an anecdote. Meanwhile, you gave your anecdote like it shows I’m an asshole or an idiot, or both, for upgrading when my PC wasn’t on literal fire. Fuck me for trying to help a buddy out on the Internet, I guess.







  • Okay, no worries. I’d at least try llama.cpp just to see how fast it is and to verify it works. If it doesn’t work, or only works once and then quits, maybe the problem is LMStudio. In that case you might want to try GPT4All (https://www.nomic.ai/gpt4all); this is the one I started with way back in the day.

    If you care enough to post the logs from LMStudio after it crashes I’m happy to take a look for you and see if I can see the issue, as well 🙌
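    If you do end up trying llama.cpp, a quick sanity check from the terminal looks something like this. (A sketch, not gospel: the model path here is a placeholder for whatever GGUF file you actually have on disk, and recent llama.cpp builds ship the CLI as `llama-cli`; older builds called it `main`.)

    ```shell
    # Load a local GGUF model and generate a short reply.
    # -m : path to your model file (placeholder below, swap in your own)
    # -p : the prompt
    # -n : max number of tokens to generate
    llama-cli -m ~/models/your-model.gguf -p "Say hello in one sentence." -n 64
    ```

    If that runs and produces text, the model and your hardware are fine and the problem is on LMStudio’s side.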



  • From that thread, switching runtimes in LMStudio might help. On Windows the shortcut is apparently Ctrl+Shift+R. There are three main kinds: Vulkan, CUDA, and CPU. Vulkan is a cross-vendor API (it’s the usual pick for AMD GPUs); CUDA is nVidia-only; and CPU is a fallback for when the other two aren’t working, but it is sssslllllooooooowwwwwww.

    In the thread one of the posters said they got it running on CUDA, and I imagine that would work well for you since it’s an nVidia chip; or, if it’s already using CUDA, try llama.cpp or Vulkan.


  • Or just accept that not everyone will be having a secure conversation every time at first, but more will be secured as more and more people like me convince our family members to use it and eventually we transition everyone away from SMS?

    No, of course not, why would we build a critical mass of users like that?

    Since they removed SMS support, almost my entire family and friend group uninstalled Signal, except the few who keep it to talk to me and the half dozen friends privacy-conscious enough to care. Dozens of people, down to eight if you don’t count me, in my circles alone. Objectively, removing SMS support harmed Signal’s popularity and made everyone less secure. The argument for why they did it was at best myopic and also, in my opinion, utter bullshit.




  • No problem! And that’s not thorough, that’s the cut-down version haha!

    Yeah, that hardware’s a little old so the token generation might be slow-ish (your RAM speed will make a big difference, so make sure you have the fastest RAM the system will support), but you should be able to run smaller models without issue 😊 Glad to help, I hope you manage to get something up and running!