

In principle, yes. It depends on your Linux distribution, though; I’m not familiar with the one you’re using.
It’s asking for the ability to take screenshots, which is definitely suspicious unless there’s an in-app screenshot feature, and for the ability to launch Discord and interact with it. The thing is, it’ll be interacting using your Discord account, I expect. That means it’ll be able to see your conversations and all the servers you’re in. It’ll also be able to post as you. Again, that’s the sort of thing which is very suspicious unless there’s some way in the app to have conversations over Discord for some reason (maybe a bug report button, or a social feature).
Basically, I’d consider both of these alarming, but not necessarily evidence that they’re spying on you to collect personal data or training data for an AI.
Because printers (of the kinds you’re likely to find on the consumer market) don’t make dust in any significant quantity.
They make fumes, which are an entirely different kind of hazard and need different precautions
True, that should have occurred to me. That’s what I get for not touching a compiler since the Christmas holidays started
That’s easy. The 2038 problem is fixed by switching to a 64-bit time_t, which is what you get on pretty much any 64-bit processor running a 64-bit OS and 64-bit applications. Just about everything built in the last 15 years has already got the fix.
With that fix, the problem doesn’t come up again for roughly 292 billion years.
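If you want to sanity-check that figure, here’s a quick back-of-the-envelope Python sketch, assuming a signed 64-bit time_t counting seconds from the 1970 epoch:

```python
# When does a signed 64-bit time_t overflow? (rough estimate)
max_time_t = 2**63 - 1                    # largest signed 64-bit value
seconds_per_year = 365.25 * 24 * 60 * 60  # average year, in seconds
print(max_time_t / seconds_per_year)      # ~2.92e11, i.e. about 292 billion years
```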
No, I’m arguing that the extra complexity is something to avoid because it creates new attack surfaces and new opportunities for bugs, and is very unlikely to deal accurately with all of the edge cases.
Especially when you consider that the behaviour we have was established way before there even was a Unicode standard which could have been applied, and when the alternative you want isn’t unambiguously better than what it does now.
“What is language” is a far more insightful question than you clearly intended, because our collective best answer to that question right now is the Unicode standard, and even that isn’t perfect. Making the very core of the filesystem deal with that is a can of worms which a competent engineer wouldn’t open without very good reason, and at best I’m seeing a weak and subjective reason here.
The reason, I suspect, is fundamentally that there’s no relationship between the uppercase and lowercase characters unless someone goes out of their way to create it. That requires the filesystem to contain knowledge of the alphabet, which might work if all you wanted was to handle ASCII in American English, but isn’t good for a system which needs to support the whole world.
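To make that concrete, here’s a small Python sketch (the specific characters are just illustrative examples of mine) of the kind of ambiguity a case-insensitive lookup has to resolve:

```python
# Python's str methods use the locale-independent Unicode case mappings,
# which is exactly the problem: case depends on the language.
print('i'.upper(), 'ı'.upper())  # both print 'I': two different lowercase
                                 # letters collapse onto one uppercase letter
print('I'.lower())               # 'i', which is wrong for Turkish text,
                                 # where the lowercase of 'I' is 'ı'
print('ß'.upper())               # 'SS': the length of the string changes
```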
In fact, the UNIX filesystem isn’t ASCII. It’s also not Unicode. UNIX filenames are arbitrary byte strings, with special significance given to only two bytes: ‘/’ (the path separator) and ‘\0’ (the terminator). That means people are free to label files in whatever way they like, and their terminals or other applications are free to render them in whatever way seems appropriate, without the filesystem having to understand Unicode.
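Here’s a quick Python sketch of that, assuming a typical Linux filesystem (some filesystems do enforce an encoding, so treat this as illustrative):

```python
import os

# A filename that is not valid UTF-8; the kernel doesn't care, because it
# only treats b'/' and b'\x00' specially.
name = b'caf\xe9'              # 'café' encoded as Latin-1, not UTF-8
with open(name, 'wb') as f:
    f.write(b'hello')
print(os.listdir(b'.'))        # the raw bytes come back exactly as stored
os.remove(name)                # clean up
```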
Adding case insensitivity would therefore mean adding significant and unnecessary complexity to the filesystem drivers, and we’d probably take a big step backwards in support for other languages.
Fair enough, I didn’t consider compute resources
The actual length of the password isn’t the problem. If they were “doing stuff right” then it would make no difference to them whether the password was 20 characters or 200, because once it was hashed both would be stored in the same amount of space.
The fact that they’ve specified a limit is strong evidence that they’re not doing it right.
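To illustrate the storage point, here’s a minimal Python sketch. It uses SHA-256 just because it’s in the standard library; a real password store should use a dedicated password hash like bcrypt, scrypt or Argon2, but the fixed-size-output point is the same either way:

```python
import hashlib

# Whatever the length of the input, the digest is a fixed size, so a
# sensible password store gains nothing by capping the password length.
short = hashlib.sha256(b'a' * 20).hexdigest()
long_ = hashlib.sha256(b'a' * 200).hexdigest()
print(len(short), len(long_))  # 64 64
```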
How did you calculate that? The question didn’t even mention a specific speed, just “near the speed of light”.
The kinetic energy for a grain of sand near the speed of light is somewhere between “quite a lot” and “literally infinity” (which is, in a sense, the reason you can’t actually reach light speed without a way to supply infinite energy).
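To put rough numbers on “quite a lot”, here’s a Python sketch using the relativistic kinetic energy formula KE = (γ − 1)mc², with an assumed 1 mg grain (the question gives neither a mass nor an exact speed, so both of those are my assumptions):

```python
import math

c = 299_792_458.0   # speed of light, m/s
m = 1e-6            # assumed mass of a grain of sand: 1 mg, in kg

for beta in (0.9, 0.99, 0.999999, 0.999999999999):
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    ke = (gamma - 1.0) * m * c * c          # relativistic kinetic energy
    print(f"v = {beta}c: {ke:.2e} J")

# The closer beta gets to 1, the larger gamma grows, without bound,
# which is why "near the speed of light" doesn't pin down a number.
```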