It will obviously look different, but with binfmt, Wine, and a sane initial setup, you could get a lot of .exe files working from a click in the UI (or from the CLI; after all, CLI apps exist on Windows too).
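For the curious, the binfmt side is roughly this. A minimal Python sketch, assuming Wine lives at /usr/bin/wine and binfmt_misc is already mounted (needs root; most distros ship a packaged rule that does the same thing):

```python
# Register a binfmt_misc rule so the kernel hands DOS/PE binaries to Wine.
# Rule format is :name:type:offset:magic:mask:interpreter:flags;
# "M" means match by magic bytes, and "MZ" is the DOS/PE header magic.
RULE = ":DOSWin:M::MZ::/usr/bin/wine:"

with open("/proc/sys/fs/binfmt_misc/register", "w") as f:
    f.write(RULE)
```

Once registered, running `./something.exe` just works: the kernel sees the MZ magic and launches the file through Wine like any native binary.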
You could improve your reading comprehension.
but ignore the long history of vulnerabilities, bugs, and cursed workarounds present in X.Org
You’re not wrong on the other points, but that one… you’d also have to ignore the things that got fixed in X.Org, and the things now showing up in the various Wayland implementations that were fixed in X.Org long ago. That’s the thing with doing things from scratch: old issues show up again sometimes.
Those figures are larger than the total storage usage on my work computer, with every tool installed and my repositories cloned locally. I know large storage is way more accessible these days, but it still sounds crazy to take up so much space.
The only way I can get above that is by installing npm dependencies in every source tree, which is also a thing that really should be improved.
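To put a number on it, here’s a quick hypothetical Python sketch that totals every node_modules tree under a source directory (the `~/src` root is an assumption; point it wherever your clones live):

```python
import os

def tree_size(path: str) -> int:
    """Sum file sizes under path, skipping unreadable entries."""
    total = 0
    for root, _dirs, files in os.walk(path, onerror=lambda e: None):
        for name in files:
            try:
                total += os.lstat(os.path.join(root, name)).st_size
            except OSError:
                pass
    return total

grand_total = 0
for root, dirs, _files in os.walk(os.path.expanduser("~/src")):
    if "node_modules" in dirs:
        full = os.path.join(root, "node_modules")
        size = tree_size(full)
        grand_total += size
        print(f"{size / 2**20:8.1f} MiB  {full}")
        dirs.remove("node_modules")  # don't walk into it a second time
print(f"{grand_total / 2**30:.2f} GiB of npm dependencies")
```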
Until it doesn’t work. There’s a lot of subtlety, and at some point you’ll have to match what the OS provides. Even containers are not “run absolutely anywhere” but “run mostly anywhere”.
That doesn’t change the point, of course; software that depends on the actual kernel or low-level libraries to provide something will be hard to get working in unexpected situations anyway, but the “silver bullet” argument irks me.
No, they hate Flatpak, one of many options for distributing software, and not the only one even if you take the “must run on many distros” restriction into account (which isn’t 100% true anyway, kinda like Java’s “write once, run anywhere”). There are other options for that, some more involved, some simpler.
They didn’t say they hate devs; that’s on you, grabbing a feeble occasion to tell someone who voiced their opinion to “fuck off”.
Yes. It’s flooding places, and suddenly people decided that “smooth looking” was the absolute end goal of any drawing/music/creation/etc. It’s not. Some of the most famous art pieces are completely wrong, some aren’t. That’s not the end goal. Nobody’s gonna care that you can take a very simplified drawing and “generate” an extremely high-detail, fully shaded image that looks like it, as that was never the purpose.
Creative direction, intent, consistency (or a complete lack of it), execution, style, and a lot more go into any creation, art or not. That’s what makes a piece feel interesting. There’s a reason that even now, with generated content being plausible as far as glaring mistakes go, we can still point out which images “feel” AI across a lot of different styles. At best, to remove that feeling of wrongness, you’d have to spend a lot of time on the output of a model, touching it up everywhere and changing details, which requires time and proficiency that a lot of people jumping on the trend definitely lack. Some of the worst results I’ve seen have been from people trying to make others “pay” for their output.
There’s also the issue of how these work. For decades, creative people (among others) have been sued by big companies, some very harshly, to protect IP from such overexploitation as “using a three-second excerpt in a video” or “using the vague likeness of a character”. And now those same targets are getting fleeced of their work by more big companies, to the cheers of the crowd. That’s a gut feeling of disgust right there. Combined with the utter lack of creativity in these, we’re really watching the potential death of an activity (artistic creation), and that’s not a good place to be. If one wants to argue that “generated art” is also a form of creation, keep in mind that these models can’t be trained on generated pieces without extreme prejudice. Killing the very source they need to operate does not seem like a good long-term plan. But who cares about the long term when you can make a quick buck, right?
I’d also like to point out that all this rambling is about generated content that goes from “output of a model” to “final piece” with little to no afterthought. The “common” piece, where people will be happy to see twenty broken pieces because “well, there’s a lot of them, so it’s good”. AI and LLM models, as tools, may or may not be useful in the long term, but I can see smaller applications, even for art. A lot of menial tasks can be improved: general posing, references, simple backgrounds that are only marginally part of the product, guides, etc. Taking something you’ve drawn/created and locally using an AI “filter” to cleanly remove an extra line or touch up a mistake you want gone? Great. The tool carries the intent of the artist, the same way a pen does.
But AI-generated content? Make a prompt, a stick-figure sketch, and call it a day? Those, IMO, will always taste like garbage, no matter how pretty they look. Because it was never “pretty” we were looking for.
some good stuff
If you want to live in medieval times with your wife/servant, sure.
It does a lot of things, in particular layer positioning (or whatever that’s called). I can’t really compare it with PS though, since I don’t have it, but for opening and doing basic stuff on complex .psd files that other software doesn’t handle well, it’s OK.
No idea how large you can get with it though.
I’d be very curious to know how much cheaper it is. Sure, there’s R&D to integrate that with everything, but that cost is split across all units sold. It feels like the actual sensors, at this scale, can’t add a significant amount to the final price.
Can’t wait for the /r/outoftheloop post asking “Trump exposes Tesla cars at the White House, what does it mean?”
Because he, and his base, are happy with them.
First, it mostly works as it would in Firefox. Go open Netflix, just for the laugh of it.
Second, a fork that depends on Mozilla’s capacity to develop the upstream is not really in the clear. From a licensing perspective, sure. But let’s assume the worst (because it’s 2025, after all): Firefox is no longer open source. Sure, we can fork from where they left off. But building, maintaining, and evolving a browser engine (and the browser itself) requires substantial work. That means developers/maintainers, and money. And staying on a “bare” browser might not be viable as long as standards keep evolving and 95% of people don’t care about that stuff.
All that to say, a fork is an option for now. A more tangible solution is needed for the future: a new “Mozilla” without the $millions CEO and structure, Mozilla splitting Firefox into a clean base and a commercial product, something else. But not a fork that just follows the Firefox source.
We notice. They’re not hiding. The (numerous) endpoints are all present in the about:config page. The actual content, though, is not that obvious to get. If we assume the binaries are compromised (I don’t believe they are for now, for the record), an outsider would only see a TLS session. At best we could get a vague idea of the amount of data exfiltrated, not the actual content. But that’s hypothetical. For now.
how are they supposed to “sell your data”
First step is collecting it. Putting in provisions to grab everything from the software you installed on your device and use for everything is a good start. Second step is selling it. Data brokers love data, surprisingly. And even small, inconsequential stuff can go a long way when you can correlate it with dozens, or hundreds, of other data points.
if you just never use a Mozilla account
Given how it’s implemented, the data pushed into your account may be in a safer place than whatever you do with the browser daily at this point.
and uncheck all the telemetry
Funny thing. Even with everything unchecked/disabled/toggled off/whatever, there’s a handful of pingbacks and other small reports still configured to go out. You can turn these off using the full config page (about:config); the one that warns people that it’s dangerous and offers no clear way to know what most of its options do.
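For illustration, a hypothetical Python helper that pins those leftover reporting prefs off by writing a user.js. The pref names below have existed in recent Firefox releases, but Mozilla renames them now and then, so verify against your version; the profile path is a placeholder:

```python
from pathlib import Path

# Telemetry/reporting prefs the regular settings UI doesn't fully cover.
# Names taken from recent Firefox versions; double-check in about:config.
PREFS = {
    "toolkit.telemetry.enabled": False,
    "datareporting.healthreport.uploadEnabled": False,
    "datareporting.policy.dataSubmissionEnabled": False,
    "app.shield.optoutstudies.enabled": False,
    "app.normandy.enabled": False,  # remote "experiments"
    "browser.ping-centre.telemetry": False,
}

# Placeholder profile path; find yours under about:profiles.
profile = Path("~/.mozilla/firefox/XXXX.default-release").expanduser()
lines = [f'user_pref("{name}", {str(value).lower()});' for name, value in PREFS.items()]
(profile / "user.js").write_text("\n".join(lines) + "\n")
```

Firefox re-applies user.js on every launch, so an update or a stray UI toggle can’t silently flip these back.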
Its not like they can secretly steal your data, since its Open Source
If by “secretly” you mean without us knowing, it would indeed be hard, as long as people actually look into the source AND the built binaries stay faithful to that source. And they are not doing it secretly, at least for now anyway. That’s the point of their “privacy notice” that covers basically everything, which they then use as a safeguard: “we can’t do shit (unless specified in the privacy notice)”.
It seems to me like just more FUD that Google is spreading to undermine our trust in free software
The policy changes come from Mozilla. They were written, published, and updated by Mozilla, on its blog (and legal pages). What the fuck are you talking about with Google?
Heck, if you knew two cents’ worth about this, you’d know Google actually low-key needs Firefox to exist as a counterpoint to Chrome’s hegemony, unless they want another trial for being too good at their job.
There are books for that, which usually take all the important bits and present them in funny, engaging ways. It could be a nice thing to get, and even to read together.
And it works with the same license key too.
Back when WinRAR existed and 7-Zip didn’t?
Open-source software is free most of the time. That doesn’t prevent people from paying for it, though.
WinRAR does a job, you need it, you can pay for it, you pay for it. Not everything is about raking in the maximum amount of money while avoiding every single expense, no matter how petty. These days there’s certainly competition, but remember that it’s been around for a long time.
Eh. I’m mostly a power user: all day at work in terminals, keyboard shortcuts galore.
That doesn’t prevent me from kicking back and running a “filthy casual” Kubuntu with little to no setup at all. At some point you reach the state where you just want to use your computer, not tinker with it all the time.