What are the most upvoted users of Hacker News commenting on? Powered by the /leaders top 50 and updated every thirty minutes. Made by @jamespotterdev.
It's Ruby; there is no way of creating classes that is anything other than dynamic creation at runtime. The class keyword creates classes no less dynamically than any other method, and assigns the created class to the constant name provided after the keyword. Using a class factory method like Data.define and assigning the result to a constant does the same thing as using the class keyword.
You are literally imagining a distinction that does not exist in Ruby.
> a rich British Black dude
I agree with you in general. But in these specific cases, there is a woman from Grand Rapids who went to state school [1] and a guy from Pomona who went to Caltech [2].
> But viral load from viruses that spread from human-to-human and live exclusively in humans has increased because there are more humans and they spread the diseases around the world more effectively than before.
This seems like pure conjecture to me, and without any actual evidence to support it, I'm disinclined to believe it. My guess is that the number of people any one individual interacts with has gone down considerably since the beginning of the century, and earlier. Before the advent of the car, shared forms of travel were much more common. People do far less of their basic shopping in person than they used to, given the rise of the Internet. Air travel, while much faster, is much less crowded than the ship travel of earlier generations.
I feel like containers and Kubernetes are the microkernels' revenge.
They are for all practical purposes fulfilling the same role.
How serious this vulnerability is depends entirely on how the site that's being attacked uses middleware. The auth thing is just the most obvious example of how an attacker can do bad things if they have the ability to selectively disable middleware by passing names as a colon separated list in an HTTP header.
(I've built sites that would have been affected by this in the past, had I used Next and middleware for auth. I've worked on plenty of systems where there are only a small set of users each with the same level of permissions - gating private documentation for example.)
Depends what you're doing. I have a content-based recommender that uses clustering for diversity, basically if you want N items and have k clusters it picks the top N/k out of each cluster. [1] k-means works just great for that.
[1] for better or worse though, if 1/k of your articles are about some topic like soccer or cryptocurrency you will always get about 1/k articles about soccer or cryptocurrency no matter how you vote.
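A minimal sketch of that top-N/k-per-cluster trick (the item ids, cluster labels, and scores below are made-up placeholders, not my actual recommender):

    from collections import defaultdict

    def diverse_top_n(items, n, k):
        # items: list of (item_id, cluster_label, relevance_score) tuples
        by_cluster = defaultdict(list)
        for item_id, cluster, score in items:
            by_cluster[cluster].append((score, item_id))

        per_cluster = max(1, n // k)      # roughly N/k picks from each cluster
        picks = []
        for members in by_cluster.values():
            members.sort(reverse=True)    # highest relevance first within the cluster
            picks.extend(item_id for _, item_id in members[:per_cluster])
        return picks[:n]

    items = [("a", 0, 0.9), ("b", 0, 0.8), ("c", 1, 0.7), ("d", 1, 0.4), ("e", 2, 0.6)]
    print(diverse_top_n(items, n=3, k=3))  # one top item from each cluster: ['a', 'c', 'e']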
Aren’t they notorious for not taking anything down?
Lately I've done a lot of asking Copilot to do things like write a Provider for a ContainerRequestContext in JAX-RS, which turned out the same as if I'd written it myself, except that I would have spent more time looking things up in the documentation.
I had a bunch of accessibility-related tickets that were mostly about adding ARIA attributes to React code involving mainly MUI components, and got good answers off the bat with the code changes made - but going back and forth with it, we found other improvements to make to the markup beyond what was originally asked for.
Perplexity Pro suggested several portable car battery chargers, which led me to search online reviews; the consensus highest-rated chargers across five or so review sites were the first two on Perplexity's recommendation list. In other words, the AI was a helpful guide to focused deeper search.
Instead he would spend his day pairing with different teammates. With less experienced developers he would patiently let them drive whilst nudging them towards a solution.
That's not a management activity - that's the kind of coaching you would expect from a senior IC ("Individual Contributor" - I still hate that term.)
Generally I would expect a "manager" to have authority over other people in the company: run performance reviews, handle promotions and hiring and firing.
I think it's important for companies to provide a career structure that allows for influence, decision making and leadership roles that don't also require taking on those management tasks. Management tasks are extremely time consuming and require a substantially different set of skills from being a great team coach or force multiplier like Tim in the story.
You might as well have an “asynchrony budget” since each point of disorder (where events could happen in a different sequence) is a place where an error could happen and the trouble scales worse than O(N) where N is the number of asynchronous processes.
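As a toy illustration of the "worse than O(N)" part (a back-of-envelope sketch, not a formal model): the number of possible orderings of N independent single-step asynchronous events is N!, so every process you add multiplies the schedules in which a bug can hide.

    import math

    # Possible interleavings of N independent single-step async events.
    for n in range(1, 9):
        print(n, math.factorial(n))   # 1, 2, 6, 24, 120, 720, 5040, 40320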
> I did find it curious though that the French guy himself never said it was for his messages about trump
I wouldn’t want to be the next public target for Musk and Trump, either.
6 months is way too infrequent. If last time you checked the state of AI was 6 months ago, you'd miss - among other things - NotebookLM's podcast generator, the rise of "reasoning models", Deepseek-R1 debacle, Claude 3.7 Sonnet and "deep research" - all of which are broadly useful to end-users.
I call that AI-assisted programming and I think it's a very appropriate way to use this tech - it's effectively a typing assistant, saving you on time spent entering code into a computer.
I called this the "authoritarian" approach here: https://simonwillison.net/2025/Mar/11/using-llms-for-code/#t...
The article clearly explains that the link isn't clear at all.
It's that certain damaging proteins are a line of defense against the HSV1 virus, that something sometimes sends those proteins into overdrive, that this is influenced by genetics broadly, further influenced by a particular gene, and that it's a second infection with shingles that can reactivate the proteins, worsening it.
Given that this is the interplay of something like at least 5 factors, and there may be more, it's not surprising it's taken this long to put together, even with all our statistical tools.
You're right about "living in the “overselling all up in your ears” epoch", but a good first defense against "being sold pffts as badabooms" is to blanket distrust all the marketing copy and whatever the salespeople say, and rely on your own understanding or experience. You may lose out on becoming an early adopter of some good things, but you'll also be spared wasting money on most garbage.
With that in mind, I still don't get the dismissal. LLMs are broadly accessible - ever since the first ChatGPT, anyone could easily get access to a SOTA LLM and evaluate it for free; even the limited number of requests on free tiers were then, and still are, sufficient to throw your own personal and professional problems at models and see how they do. Everyone can see for themselves that this is not hot air - this is an unexpected technological breakthrough that's already overturning the way people approach work, research and living, and it's not slowing down.
I'd say: ignore what the companies are selling you - especially those who are just building products on top of LLMs and promising pie in the sky. At this point in time, they aren't doing anything you couldn't do for yourself with ChatGPT or Claude access[0]. We are also beginning to map out the possibilities - two years since the field exploded is very little time. So in short, anything a business does, you could hack yourself - and any speculative idea for AI applications you can imagine, there's likely some research team working on it too. The field is moving both absurdly fast and absurdly slow[1]. So your own personal experience of applying LLMs to your own problems, and watching people around you do the same, is really all you need to tell whether LLMs are hot air or not.
My own perspective from doing that: it's not hot air. The layer of hype is thin, and in some areas the hype is downplaying the impact.
--
[0] - Yes, obviously a bunch of full-time professionals are doing much more work than you or me over a couple of evenings of playing with ChatGPT. But they're building a marketable product, and 99% of the work that goes into that is something you do not need to do if you just want to replicate the core functionality for yourself.
[1] - I mean, Anthropic just published a report on how exposing a "thinking" capability to the model in the form of a tool call leads to improved performance. On the one hand, kudos to them for testing this properly and publishing. On the other hand, that this was worth doing was stupidly obvious ever since 1) OpenAI introduced function calling and 2) people figured out "Let's think step by step" improves model performance - which was back in 2022[2]. It's as clear an example as ever that both hype and productization lag behind what anyone paying attention can do themselves at home.
People love to throw jabs at one of the companies that happens to top the patent charts every year, and that has also been responsible for keeping the lights on for many critical projects in the Linux ecosystem since 1998.
The EU is currently building a security framework to avoid being invaded by Russia.
Seriously, "tree of the year" is conceptually nonsensical.
Unless, once a tree wins, it's disqualified for the next half-century?
But then you're basically making a ranked list of the top 50 trees and then approximately repeating it.
I dunno. Maybe restrict it to one of thirty or forty subcategories of trees each year?
Interesting. Do you feel like creating an MCP for a niche devtool would lead you to use it or explore it more? Or is the wariness such that you wouldn't want to?
Can someone explain what the advantages of this technique are? The article only says it's cheaper, without explaining why.
The original story here was published on December 11th 2022, just 12 days after the launch of ChatGPT (running on GPT-3.5).
I feel most of this document (the bit that evaluates examples of ChatGPT output) is more of historic interest than a useful critique of the state of LLMs in 2025.
This kind of argument is like how Gentoo used to be considered better than anything else, because everyone knew how each little piece was built - and here we are two decades later: who is still wasting nights compiling everything?
This is incorrect. In the federal system, while felonies are punishable by imprisonment over a year and misdemeanors by imprisonment up to a year, the important word is punishable - whether the crime is a felony or a misdemeanor is simply inherent to the charge, and if you are convicted, the nature of the crime doesn't change based on the imposed sentence. E.g. if you are convicted of a felony but receive a prison sentence of a year or less, you're still a felon - the charge doesn't just turn into a misdemeanor because you received a shorter sentence.
Right - there are two reasons I think we should define "vibe coding" as meaning LLM generated code we didn't review.
1. That's how the person who coined the term defined it.
2. I think "vibe coding" is a more useful term if we give it that definition.
When I say "I vibe coded that", I'm communicating a very useful piece of information: I'm saying that the code might work and might not, and if you ask me to explain what it does I won't be able to without further work. I'm saying it's prototype quality code that needs more investment before it can be used in production.
If we let "vibe coding" mean any code that an LLM generates it loses value as a differentiating term. If you tell me "I wrote this with the help of an LLM" I'll shrug my shoulders - if the code is reviewed and tested and indistinguishable from code you would have written without that LLM then I care about as much as if you told me you used Sublime Text as opposed to VS Code.
What matters to me is communicating "this is unreviewed LLM-generated code that may or may not be production ready but I can't tell you one way or another". THAT is what "vibe coding" means to me.
I'm pretty sure I'm going to lose this one. Vibe coding is already being used to mean "an LLM wrote it" and - even though it's only just over a month old - I think it may be too late to save it.
I can try though!
> Despite extensive research on my part, I was unable to find a study that suggests dieting becomes easier with time, or that the body’s ‘set weight’ can ever be reset lower. It’s like riding an escalator, in that the only way is up. The best way to ‘fix’ obesity is to never become obese in the first place.
https://www.science.org/content/article/fat-cells-remember-b...
https://www.nature.com/articles/s41586-024-08165-7
Fat cell memory must be targeted for the weight loss to stick.
This could be considered a variant of the "bootstrapping" problem.
> Why does it look like there are two suns in two of the photos -- one at the horizon and one above?
I think that it's a reflection (on what? internal to the camera optics, somehow?). In both cases, the second sun also seems to have a horizon occluding it, from the opposite direction.
Only if you keep it out of the hands of oligarchs and authoritarians. Otherwise, it will be used to enslave.
It's hard to believe it would work as a word processor; if you wanted 80x25 you'd have up to 2000 characters on the screen, each drawn with a few curve, line, and move strokes. No matter how you slice it, it's a lot of data, and the electronics strike me as crazier than a raster text display - and it's still crazy for a home computer, which is more like 40x20. Do you have a huge display list of lines and curves, or a character array which indexes display lists for each character?
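A back-of-envelope sketch of that data question (all the per-glyph numbers below are assumptions, not actual Vectrex specs):

    chars_on_screen = 80 * 25      # a "word processor" text screen
    strokes_per_glyph = 8          # assume ~8 line/move segments per character
    bytes_per_stroke = 4           # assume 2 bytes each for a stroke endpoint's x and y

    # Option A: one giant display list, strokes repeated for every character drawn
    display_list_bytes = chars_on_screen * strokes_per_glyph * bytes_per_stroke

    # Option B: a character array indexing a shared per-glyph stroke table (like a font ROM)
    glyphs = 96                    # printable ASCII
    char_array_bytes = chars_on_screen
    font_table_bytes = glyphs * strokes_per_glyph * bytes_per_stroke

    print(display_list_bytes)                   # 64000 bytes
    print(char_array_bytes + font_table_bytes)  # 5072 bytes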
I'm somewhat involved in local government in Chicagoland and so know where to look to get this figure: federal grants make up 0.2% of the Oak Park Public Library budget. Rayiner is right about this.
The Mythbusters did it for real once. I'd describe the results as "within error bars, consistent with the same speed of drop". Lift does not seem to be a significant factor. Shooters would have noticed. They notice a lot of things.
No, states fuck with DNS all the time; it's probably the single most legible control point on the Internet for governments.
The locations indicated in the job descriptions imply the jobs are not US-based but LatAm-based, where minimum wages are lower.
This is false. I frequently find myself annoyed at my AC because it only has settings of 72°F and 74°F, and they are a little too cold and a little too warm for me. I want 73°F. When it's around room temperature, you can absolutely tell the difference.
The further away from room temperature, the less we can distinguish. All our senses work logarithmically like that.
I really enjoyed my Vectrex, I had every cartridge and every peripheral AFAICT. I ended up selling it to offset the cost of a fancy monitor for my Amiga :-).
Relatedly, Steve Ciarcia did a vector display which was a lot of fun back in the day. You can find it in the back issues of BYTE, but here is a scan: https://drive.google.com/file/d/1y6SfEN8idhdZ7Y_rNh0zD9uzHEC...
This reminds me of DOS-based Windows which would need to get out of the V86 mode that EMM386 used before going into protected mode itself; a task which was done using the undocumented (at the time) GEMMIS interface.
Another accurate representation, from the article:
https://militaryrealism.blog/wp-content/uploads/2025/03/figu...
> What would you cut?
Were I concerned about fiscal balance, I wouldn't view cutting as the best way to solve the problem, I would raise high-end taxes.
> I do know that the United States is heading for a financial apocalypse unless drastic measures are taken now.
Insofar as that’s true, it is a direct result of the actions taken thus far by this administration, not something they are correcting—and not through fiscal imbalance causing wider problems, but by a broad economic collapse directly (which, because the broad economy drives revenue, has fiscal balance problems as a second-order impact).
Fair warning, you are standing on the precipice of a very deep rabbit hole :-). My Dad was a 're-loader' (loaded his own ammunition) and molded his own bullets and had charts and graphs for an amazing set of things. I even built him a chronograph so that he could "know" the speed of a bullet as it left the gun.
Given that a 'zeroed in' sight will account for the drop and so points the barrel up slightly, some of the energy in a shot goes to lifting it "up" before it starts falling down. You might think that would make it take longer, but the effect is small: the sine of 0.5 degrees is only .00873, so less than 1% of the velocity is "heading up" before heading down. The more 'up' you point it, the bigger the difference. But as the author points out, the impact of various factors varies. Dad was happy he could put three rounds from his 30-06 into a 3" target circle at 200 yds.
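A quick sanity check of that sine figure, with an assumed (not chronographed) muzzle velocity:

    import math

    angle_deg = 0.5                                 # barrel tilt from a 'zeroed in' sight
    vertical_fraction = math.sin(math.radians(angle_deg))
    print(vertical_fraction)                        # ~0.00873, i.e. under 1% of the velocity "heads up"

    muzzle_velocity_fps = 2700                      # assumed ballpark for a 30-06, not Dad's load data
    print(muzzle_velocity_fps * vertical_fraction)  # ~23.6 ft/s of initial upward speed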
Government funded research also has perverse incentives:
1. publish or perish, leading to lots of low quality papers
2. funding doesn't continue if one doesn't get results, leading to selection of "safe" research rather than risky research, and results that cannot be replicated
3. no funding for politically unpopular topics
and, of course, the reasons why people publish overtly fraudulent research papers.
Moral training is useless in the face of the social media flamethrower. It's very easy for people to surround themselves with a feed that tells them that immoral things are Good, Actually. People can reason themselves into anything. The Effective Altruism lot turning to financial crimes with SBF is a good example.
You're not wrong in general here, but C++ is going to get the core of reflection in C++26. I'm not sure enough of the details to know if it supports doing this, however.
Rust on the other hand... that might be ten years.
There's an art to outsourcing, and - even worse than with LLMs - it's not something you can ever just do and forget, because without active management, you'll eventually end up wasting money and time while getting nothing in return.
QA is a whole other story, too. Outsourcing QA is stupid, but even more stupid and short-sighted is not having QA in the first place, and that unfortunately is becoming a norm.
There's a lot of false economy going on with jobs, too. Getting rid of QA may save you salaries, but the work doesn't disappear - it just gets dumped on everyone else, and now you're distracting much more expensive engineers (software or otherwise), who do a much worse job at it (not being dedicated specialists) and cost more. On net, I doubt it's ever saving companies any money, but the positives are easy to count, while the negatives are diffuse and hard to track beyond an overall feeling that "somehow, everything takes longer than it should, and comes out worse than it should, who knows why?".
> You will understand once DOGE starts ethnic cleansing
The administration's ethnic cleansing policy is already being executed, DOGE so far hasn't been particularly central to that aspect of the Administration's abuses.
How is that different than non-AI crawlers doing the same for the past decade or so? Tons of businesses engage in site crawling and scraping, and many of them are bad citizens.
My issue isn't with blocking bad-behaving bots - it's with singling out LLMs (both training and use), or worse, assuming the problem is being associated with AI and not bad bot behavior.
Archive: https://archive.ph/2025.03.22-175608/https://www.theatlantic...
The thing is that in the DOGE era there will still be intense pressure against social media sites no matter what.
> It had all the hype, hands off, cheaper for the same work, faster to market, every other argument you've all certainly heard.
Like some of the other responses, I'm baffled by your comment. Have you not seen what's happened in the past 5 years or so?
Yes, there was an outsourcing craze to India after the .com bubble burst in the early 00s that largely failed - the timezone, cultural differences, and lack of good infrastructure support made it fail.
The past 2 companies I've worked for offshored the majority of their software engineering work, and there was no quality difference compared to American devs. The offshore locations were Latin America and Europe, so plenty of timezone overlap. The companies are fully remote, so what difference does it make if the dev is in your same city or a thousand miles away?
I think offshoring has absolutely put downward pressure on US dev salaries in the past couple years.
I am as optimistic as a factory worker about to be replaced by robots.
Hence why I can't stand such techno-optimism; apparently most folks live in an ideal world free of economics.
Not all AI-assisted programming is vibe coding (but vibe coding rocks) - https://simonwillison.net/2025/Mar/19/vibe-coding/
I wrote this because I was worried that "vibe coding" was being misinterpreted to mean "any time an LLM outputs code", as opposed to the intended definition of code where you deliberately don't review the code and see how far you can get.
The reference to Claude Plays Pokemon isn't applicable to the discussion of vibe coding, although the suggestion that AI agents can fix the issues with vibe coding is funny in an ironic way given the disproportionate hype around both.
The issues with Claude Plays Pokemon (an overview here: https://arstechnica.com/ai/2025/03/why-anthropics-claude-sti... ) are essentially due to the 200k context window being finite, which is why it has to use an intermediate notepad. In the case of coding assistants like Cursor, the "notepad" is self-documenting with the code itself, sometimes literally with excessive code comments. The functional constraints of code are also more defined, both implicitly and optionally explicitly: for Pokemon Red, the '90s game design doesn't often give instructions on where to go for the next objective, which is why the run is effectively over after getting Lt. Surge's badge, as the game becomes very nonlinear.
That said, both vibe coding and Claude Plays Pokemon rely on significant amounts of optimism about the capabilities of LLMs.
I'd like to come back to this in a year to see how maintainable the codebase is.
I wonder if the authors were laid off from NOAA.
> Are they auto running the retries - in a background job which they handle (like “serverless” background job)?
Yes. The library makes sure that your app retries failed workflows. Or, if you use the commercial products, they take care of that for you.
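As a rough sketch of what "retries failed workflows" usually amounts to under the hood (hypothetical names, not this library's actual API):

    import time

    def run_with_retries(run_workflow, max_attempts=5, backoff_seconds=2):
        # Hypothetical helper: re-run a workflow function until it succeeds,
        # backing off between attempts. Real durable-workflow libraries also
        # persist progress so completed steps aren't repeated on retry.
        for attempt in range(1, max_attempts + 1):
            try:
                return run_workflow()
            except Exception:
                if attempt == max_attempts:
                    raise                               # give up, surface the failure
                time.sleep(backoff_seconds * attempt)   # simple linear backoff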
I'd imagine a certain kind of person (a member of the nobility to start with) would think the peak of civilization was France in 1788 or Japan in 1852.
My Facebook is mostly baby pictures and updates of my large extended family. I enjoy seeing my cousin's husband's dad post proudly that his son got a promotion. Why wouldn't you?
If you haven't read this classic about technology deployment from 2015, it's worth a read.
https://reactionwheel.net/2015/10/the-deployment-age.html
Feels like we're still in the exploration phase of GenAI, but ML seems like it is in the deployment phase.
I agree most folks aren't aware of the risks. But I'm guessing for the vast majority of people that are aware of the risks, the thought process is basically along the lines of:
1. I'm simply not that important. There are millions of other people who have given this data to 23 and me and the like, and I'm just some rando peon - nobody is going to be specifically searching for my DNA.
2. The "worst case scenarios", e.g. getting health insurance denied because you have some gene, still seem implausible to me. Granted, there is a ton of stuff I thought would be implausible 5-10 years ago that is now happening, but something like this feels like it would be pushed back against from all sides of the political spectrum, even in our highly polarized world.
3. I haven't murdered anyone, so I'm not worried about getting caught up in a DNA dragnet. Sure, there can be false positives, but to get on in life you pretty much have to ignore events with low statistical probability (or otherwise nobody would even get in a car on the road, and that has a much higher statistical probability of doing you harm).
Sun on the horizon, Earth above it. (See caption.)
> The closest thing you get to principles are civil servants, who are hired for their domain expertise. But voters grumble because those civil servants aren't elected; they are basically self propagating by hiring their successors
It’s a principal-agent problem. Civil servants have principles and act on them, but they’re not necessarily the principles voters share.
The problem, not stated, is that a bankruptcy can wipe out the obligations of a company to its customers. This includes privacy obligations.[1] Especially if the assets are sold to a company outside California or outside the US.
[1] https://harvardlawreview.org/print/vol-138/data-privacy-in-b...
I don't understand these photos at all. Why does it look like there are two suns in two of the photos -- one at the horizon and one above?
Is one of them lens flare or something? I don't think the top one could be an overexposed earth because we're obviously looking at the earth's dark side. And in the other photo (middle in the gallery) it looks like a little bit of the earth is eclipsing the second sun.
> A schengen area visa would be very valuable for me, an American
I don't believe any European country accepts 23andMe results as proof of ancestry.
Funny, I'm the exact opposite. It's driving me mad.
Why do I have to scroll down 95% of the way to see some simple examples, when that's the first thing I'm interested in?
Why does it start with a list of "features" of like 25 unorganized bullet points?
Surely the long quotation about European kings belongs in some "history" or "origin" section, rather than the first thing I need to read, when I'm just trying to figure out if this is a) open-source, b) for which operating systems, c) actively maintained, and d) for quick calculations or complicated modeling?
Why do I have to scroll down a couple of page lengths just to get to the table of contents?
This is basically one of the worst landing pages for a project I've ever seen. It completely violates the basic principles of putting the most important information at the top, of organizing content hierarchically (don't ever just put 25 bullets in a row), and trying to show rather than tell whenever possible.
This is a way to express your reservation, pursuant to Article 4(3) of the EU's DSM Directive.
The legal machinery is already in place; we now need precisely that: a standard for machine-readable reservations.
Because I was curious.
(submitted a data deletion request)
Hydrogen could replace fossil gas for firm generation in the near term, making it even easier for Europe to disconnect from Russia for energy.
> The six-continent combined-America model is taught in Greece and many Romance-speaking countries—including Latin America.
https://en.wikipedia.org/wiki/Continent?wprov=sfti1#Number
All English-speaking countries count them as two continents though.
And in English, it's conventionally "the Americas" even if you believe it's a single continent.
Kind of like, we call it the Bahamas, not Bahama, even though it's one country. It's linguistic, not conceptual.
Unless one is writing something where pauses are a no-go, even a tiny µs, I don't see a reason for rushing to affine type systems, or variations thereof.
CLI applications are exactly a case where it doesn't matter at all; see Inferno and Limbo.
Amazing! Can you imagine being in that storm? (Not that you could see the same thing as the photo because it was in UV. And you'd probably be dead :) )
Thank you! The original blog post sets up a story and then gives no ending.
> Chang established the basis of this research in a previous publication with an intentionally simplified model that ignored such complexities as geography and migration. Those precise mathematical results showed that in a world obeying the simplified assumptions, the most recent common ancestor would have lived less than 1,000 years ago. He also introduced the "identical ancestors point," the most recent time -- less than 2,000 years ago in the simplified model -- when each person was an ancestor to all or ancestor to none of the people alive today
This is clearly ridiculous. We have historical written records to know this isn’t true.
Yes, "half of all floating-point numbers are in the interval [-1,1]" but measures over real numbers and other continuous spaces are about as toxic as you can get
https://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox
In my mind the "real" numbers as described by Cantor ought to be called the phony numbers, whereas there is a lot more reality in the integers or the rationals in that every integer has a name.
You could just as well say that "half of the reals are in the [-1, 1] range" in that you could say there is a 1:1 correspondence between x and 1/x or you could say there are as many reals in [-1, 1] as there are between [π, 2π].
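A quick empirical check of the floating-point claim (a rough sketch; the exact fraction depends on how you treat NaNs and infinities):

    import random
    import struct

    # Reinterpret random 32-bit patterns as IEEE 754 single-precision floats
    # and count how many land in [-1, 1]. About half of the exponent range
    # encodes magnitudes below 1, so the fraction comes out near 0.5.
    random.seed(0)
    in_range = total = 0
    for _ in range(1_000_000):
        bits = random.getrandbits(32)
        (x,) = struct.unpack("<f", bits.to_bytes(4, "little"))
        if x != x or x in (float("inf"), float("-inf")):
            continue                    # skip NaN and infinities
        total += 1
        in_range += -1.0 <= x <= 1.0
    print(in_range / total)             # ~0.5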
It's interesting that Congress is talking about sunsetting section 230 which provides protection for web sites that display user generated content from being responsible for what that content says, and what Amazon wants here is apparently the equivalent for drop-ship/forwarding sales companies and the products those companies ship through them. But the argument is that safety agencies are unconstitutional?
For me, this suggests a high level of dysfunction in the Government if people asking for things from the Government do so using the biases of the people in power as the basis for their argument rather than reasoning to it from some set of principles. I don't know how much of this article was inferred by the person writing it and how much accurately reflects Amazon's position, so I can't draw strong conclusions from it, but it's an interesting reflection on the extreme 'buyer beware' attitude of cutthroat businesses that take no responsibility for the harms they inflict on their own customers.
It's crept up gradually on people how harmful the "personalization" model is.
My pet peeve is that many community organizations (such as a board game club and a game development club at my Uni) use Facebook and Instagram as their only communication channels. That kind of platform opens you up to so much bullshit, bullying and cringe (all these girls who look the same who supposedly want to follow me). You might be sharing content which is really wholesome, but you have no idea what is getting served right next to it. It's rare to see an organization like this one
https://fingerlakesrunners.org/
where you won't find anything that isn't about running, where moderators can squash things that aren't relevant without cries of "censorship", etc. I wouldn't even mind if a site like that had ads for running shoes or Wegmans or local car dealerships, but personalized ads and other recommendations can be so toxic, and not things you want associated with your organization or your brand.
Monsanto is part of a German company now which, inexplicably, paid a lot of money to buy its way into being the defendant in Monsanto's lawsuits; see this sad chart...
> coop owned by the customers
I find it a bit hilarious that I, living in the reddest state in the union, have member-owned cooperative power [1] that is 100% wind while my parents in California pay 6x my rate for mostly natural gas [2] and wildfires.
[1] https://www.lvenergy.com/my-account/
[2] https://www.energy.ca.gov/data-reports/energy-almanac/califo...
> Yeah right at the start it says
That's not right at the start, and what is right at the start actually is better support for your point. Is this a copy-and-paste error?
It is good for the dictator and oligarchs, while it lasts.
> Following legal orders from Turkish judges. Just like it does in the US and EU
Musk is notorious for blowing off court orders he doesn’t like. Particularly in America. When he complies, he does so begrudgingly and while levying threats.
So yeah, it is notable that he quietly complies with a Turkish court order while yapping about free speech.
> This is how the monetary system works.
No, it's how capitalism works. The monetary system is largely orthogonal.
Not really. I mean, if objects corresponded directly and only to business organizations—the active entities in business—OOP would correspond tolerably well to how “the real business world” works, but as usually implemented OOP is a very bad model for the business world, and particularly the assignment of functionality to objects bears no relation to the underlying business reality.
IME, people don't actually “reason better with objects”, either.
(1) Plant-based milks are not superior to cow milk in nutrition; I'd say that almond milk is closer to soda pop than skim milk, and not as good as eating the almonds. Almond milk from California almonds is a greater drain on water resources than cow milk from upstate NY.
(2) Animal agriculture isn't necessarily bad ecologically. Yes, you have to feed 3N to 20N units of food to an animal to get N units of food, but those could be some food that humans can't eat, like grass, that can be produced without tilling the land, on steep slopes for instance. On a large scale an animal feeding operation produces a dangerous amount of waste, on a small scale that "waste" gets put back on the land and builds fertility and soil carbon.
Our farm was beautiful when we bought it 25 years ago, but our horse operation has made it even more beautiful and productive (of hay and pasture grass) because we import nutrients in the form of grain and hay and feed back 100% of the "waste" to the soil. I don't know what effect it has on the global carbon balance, but I can see the local consequences of building soil carbon.
This can't be the explanation for everything, but I do know that once upon a time just the sheer source-code size of the types was annoying. You have to create a constructor, maybe a destructor, and there's all this syntax, and you have to label all the fields as private or public or protected and worry about how it fits into the inheritance hierarchy... and that all still applies in some languages. Even dynamic scripting languages like Python, where you'd think it would be easy, tended to need an annoying and often 100% boilerplate __init__ function to initialize them. You couldn't really just get away with "class MyNewType(str): pass" in most cases.
But in many more modern languages, a "new type" is something the equivalent of
type MyNewType string
or data PrimaryColor = Red | Green | Blue
and if that's all your language requires, you really shouldn't be afraid of creating new types. With such a small initial investment it doesn't take much for them to turn net positive. You may need more, but I don't mind paying more to get more. I mind paying more just to tread water.
And I find they tend to very naturally accrete methods/functions (whatever the local thing is) that work on those types, which pushes them even more positive fairly quickly. Plus, if you've got a language with a halfway modern concept of source documentation, you get a nice new thing you can document.
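For comparison, modern Python gives you similarly cheap ways to mint a new type (a small sketch; the names are just examples):

    from dataclasses import dataclass
    from enum import Enum
    from typing import NewType

    # A distinct name for a plain string, enforced by type checkers at ~zero cost.
    UserId = NewType("UserId", str)

    # A closed set of values, the moral equivalent of "data PrimaryColor = Red | Green | Blue".
    class PrimaryColor(Enum):
        RED = "red"
        GREEN = "green"
        BLUE = "blue"

    # A record whose __init__, __repr__, and __eq__ are generated for you.
    @dataclass(frozen=True)
    class Point:
        x: float
        y: float

    print(UserId("u-42"), PrimaryColor.RED, Point(1.0, 2.0))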
Rust does not keep variables alive until references are gone. It checks to make sure that any reference lives as long as or shorter than its referent. If the reference is longer lived than the referent, that’s a compile error.
A GNU/Linux distribution; in Android, the use of the Linux kernel is an implementation detail.
Besides the Java and Kotlin userspace, you are only allowed to use these APIs in the NDK; notice none of them are Linux APIs, but rather C and C++ standard libraries, Khronos APIs, and Android-specific ones.
Here's a really interesting line from that story:
> In a message found in another legal filing, a director of engineering noted another downside to this approach: “The problem is that people don’t realize that if we license one single book, we won’t be able to lean into fair use strategy”
I hadn't heard that idea before. Any IP law experts able to give useful context on that one?
Elon can’t “gut” local libraries—local libraries are run by municipal governments and receive almost all their funding from there.
Total U.S. public library and museum spending is almost $14 billion: https://wordsrated.com/library-funding-statistics. This agency distributes about $250 million in federal support, which also supports museums.
https://archive.ph/tGYT1