Think of it like the flooded-house analogy. They could paint the drywall while tearing it out at the same time. But why would they?
They’re not refusing. They’re actually doing the opposite. But they needed to get their house in order first.
The 3.0 upgrade was the result of getting their house in order and modernizing. Doing cosmetic changes beforehand would have made no sense, because those changes would have been thrown away when they had to modernize things anyway.
I think I have an analogy.
GIMP was like an old American-style wooden house that was flooded. After the water recedes you could try to make things look nicer by plastering and painting the walls, etc. But as goes with flooded houses, if you do this the mold will rot everything out.
In order to save a flooded house you need to remove all the drywall and use fans to dry out the internals. Once things are dry, then you can plaster and repaint.
GIMP 3.0 was them ripping out the drywall and air-drying the internals. Now that that's done, it makes sense to clean up the UI.
If you clean up the UI before you dry the walls out, it's just a waste of time, because those improvements would need to be ripped out with the drywall anyway.
It's not perfect as far as analogies go, but it's close. GIMP should never have let the house flood in the first place (the analogy breaks down a bit here). But since they did, they needed to fix the fundamentals before it was worth fixing the UI.
All this being said, they could at this point genuinely refuse to change anything UI-wise. I hope they choose to pull a Blender or a Krita, but they don't have to.
I mean, the whole point of doing the mega rewrite to GTK3 was specifically to enable this kind of forward-looking progress.
What they did in the 3.0 release was, largely, a massive modernization of a dinosaur code base.
Now that it’s done it makes sense to do a UI overhaul. Before 3.0 it made no sense to even try, now it does.
Two reasons.
OpenSCAD is only lightweight until you start working with Minkowski sums and hulls. Then you just want a faster CPU.
I think you should do more than just add a disclaimer: you should add a proper license. It protects you and allows others to build upon your work in a predictable way.
That and licenses are legally battle-hardened and proven; a self-written disclaimer is not.
I would recommend licensing your scripts under the GPL. This lets other people use them with the understanding that if they improve them, they have to let others use the improvements too.
That and it protects you the way you want, particularly sections 15 and 16:
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
Alternatively you could use the MIT or BSD licenses, but they don't have the share-alike (copyleft) requirement, so I tend not to recommend them.
But you make the choice. That's the difference. Also, if your company fails because you fire one client, you have bigger issues.
You can argue that being your own boss with clients is the same as working a traditional job if you want, but if you ask people who have made the jump, they'll disagree.
Self employed person here.
Yes and No.
Yes, clients can make demands like a traditional "boss" can, but at the end of the day I can fire my clients any time I want.
That makes a world of difference.
Helix + the appropriate set of LSPs.
It's like Neovim without the need to manage plugins. It also uses select → action instead of Vim-style action → select, which makes more sense to me.
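Pointing Helix at an LSP is a few lines in its `languages.toml`. A sketch, assuming a recent Helix release and that `pylsp` is installed on your PATH; the language and server choice are just examples:

```toml
# ~/.config/helix/languages.toml -- hypothetical example
[[language]]
name = "python"
language-servers = ["pylsp"]  # assumes pylsp is installed and on PATH
```

Helix discovers most common servers out of the box, so often you don't even need this much.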
It's a hard problem in the fediverse, and it makes for a ticking time bomb of an issue. Imagine I'm on an "everything is your own, we don't sell your stuff" instance while another instance has just copy-pasted Meta's ToS. By posting a response to my instance, which is then federated to the Meta-style instance, I create something hard to resolve. I can foresee other issues too.
I see your point. I just think it’s a difficult problem.
I don't think the ToS approach would be invalidated here by your safe-harbor fork theory.
The ToS could state something like "you grant us a worldwide, perpetual right to use your content in any way we want, including granting this right to whomever we designate."
You still own your content but by having an account you agree to the ToS that lets them do what they want.
They just host it and are safe.
I don't think it's equivalent to sovereign citizens. OP is the author of their comment and therefore holds the copyright. As the author, one can license one's work as all-rights-reserved or under permissive licenses.
OP chooses to license their work as Creative Commons.
They’re not forcing you to accept the license, it’s your local government that enforces copyright.
The reason this might work on Lemmy but not on corporate social media is that corporate social media often have terms of service that require you to grant them ownership/rights/etc. Lemmy has no such ToS.
It's government reporting data. If you find a better source, I say go for it. But I have successfully used that data for salary negotiations in the past.
I’m not talking about take home. I’m talking about total annual compensation including things like RSU payouts etc.
Even if we throw out the entries you doubt, there are many $300k to $400k entries with the AI researcher title. If we add annualized RSU payouts we easily hit over $500k.
At this point, though, you are free to doubt me.
Maybe not with just if statements. But with a heuristic system I bet any site that runs a tarpit will be caught out very quickly.
When I worked in the U.S. I was well above $160k.
When you look at leaks you can see $500k or more for principal engineers. Look at Valve's lawsuit information: https://www.theverge.com/2024/7/13/24197477/valve-employs-few-hundred-people-payroll-redacted
Meta is paying $400k BASE for AI research engineers, with stock on top, which in my experience is an additional 300%-600%, vesting over 2 to 4 years. And that's for H-1B workers, who traditionally are paid less.
Once you get to principal and staff level engineering positions compensation opens up a lot.
https://h1bdata.info/index.php?em=meta+platforms+inc&job=&city=&year=all+years
ROI does not matter when companies are telling investors that they might be first to AGI. Investors go crazy over this. At least they will until the AI bubble pops.
I support people resisting by setting up tarpits if they want. But it's a hobby and isn't really doing much.
The sheer amount of resources going into this is beyond what people think.
That and a competent engineer can probably write something on the BEAM VM that can handle a crap ton of parallel connections. Six figures, maybe? Being slow-walked means low CPU use, which means more green threads.
I see your point, but I think you underestimate the skill of coders. You make sure your timeout includes JavaScript run time, and maybe set a memory limit too. Imagine you wanted to scrape the internet: you could solve all these tarpits. Any capable coder could. Now imagine a team of 20 of the best coders money can buy, each paid €500,000. They can certainly do the same.
I see the appeal of running a tarpit. But I don't see how one can "trap" anyone but script kiddies.
That's crazy. Where do you live, roughly? In Germany and in the U.S. I don't see any WEP stuff.
And I can't even imagine 802.11b/g being considered to "work" with the modern internet. One mid-bitrate 1080p stream would overwhelm it.
Fair. But I haven't seen any anti-AI-scraper tarpits that do that. The ones I've seen mostly just pipe 10 MB of /dev/urandom out there.
Also, I assume the programmers working at AI companies are not literally mentally deficient. They would certainly add .timeout(10) or whatever to their scrapers, and they probably have something more dynamic than that.
I see what you mean. But beyond modern encryption, nothing else really adds security there. A script kiddie is exactly the type who can change their MAC address or run aircrack-ng to get the hidden SSID.
I guess modern encryption is like running from a bear by flying in a high-speed aircraft. The rest is like trying to add speed by blowing air out a back window through a straw. It might feel like you're adding speed, but functionally nothing is added.
Also, I haven't seen WEP-encrypted networks in like 10 years. Do you see them often?
I don't think they did. The bar is pretty low, and it still applies whether you're poor or rich.
No need to bring incel culture here.