What are the most upvoted users of Hacker News commenting on? Powered by the /leaders top 50 and updated every thirty minutes. Made by @jamespotterdev.
If the author sees this comment, https://news.ycombinator.com/item?id=43168838 might be relevant as it relates to catalogue completeness. OpenLibrary is very good, but Anna's Archive is potentially more complete.
When doing feature detection for execution path selection, it’s sometimes useful to run some quick benchmarks to see which path is objectively best.
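The benchmark-then-pick approach described above can be sketched like this (a toy illustration; the function names and the summation example are my own, not from the comment):

```python
import timeit

# Hypothetical stand-ins for two execution paths that compute the same result.
def sum_loop(xs):
    total = 0
    for x in xs:
        total += x
    return total

def sum_builtin(xs):
    return sum(xs)

def pick_fastest(candidates, sample, repeat=5, number=2000):
    """Briefly time each candidate on a representative input and
    return the one with the best observed run."""
    def best_time(fn):
        return min(timeit.repeat(lambda: fn(sample), repeat=repeat, number=number))
    return min(candidates, key=best_time)

# Run the quick benchmark once at startup, then use the winner everywhere.
fastest = pick_fastest([sum_loop, sum_builtin], list(range(256)))
```

Taking the minimum of several repeats is the usual trick to reduce noise from other processes; the winner can then be cached for the lifetime of the program.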
Now we have two-ish implementations of x86, but back in the 1980s and 1990s we had quite a few, some with wildly different performance characteristics.
And, if we talk about ARM and RISC-V, we’ll have an order of magnitude more.
HN wants Firefox but with better stewardship and fewer misdirected funds.
Mozilla - wrongly - believes that the majority of FF users care more about Mozilla's hobby projects than about their browser.
That's why - as far as I know - to this day it is impossible to directly fund Firefox. They'd rather take money from Google than focus on the one thing that matters.
It's a Chrome fork. If you want to use Chrome... just use Chrome.
And this is why the price for Twitter was, in the end, remarkably low.
I’m concerned not everyone has their base colors set sensibly - as in “there’s no guarantee the base garish RGB green is green on this machine”. Maybe the right thing would be to put the color closest to a base color on the corresponding corner of the RGB cube, but that’s also not ideal - I have had terminal palettes that were all green or all orange/red, or the green/yellow of EL displays.
Maybe applying the saturation of the base set across all the generated palette would also work.
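The "snap each color to the nearest corner of the RGB cube" idea from the comment above can be sketched roughly like this (a toy illustration; the function name and midpoint threshold are my own choices):

```python
def nearest_cube_corner(r, g, b):
    """Snap a 0-255 RGB triple to the closest corner of the RGB cube
    (equivalently: threshold each channel at the midpoint)."""
    corner = tuple(255 if c >= 128 else 0 for c in (r, g, b))
    names = {
        (0, 0, 0): "black",      (255, 0, 0): "red",
        (0, 255, 0): "green",    (0, 0, 255): "blue",
        (255, 255, 0): "yellow", (255, 0, 255): "magenta",
        (0, 255, 255): "cyan",   (255, 255, 255): "white",
    }
    return names[corner]
```

As the comment notes, this breaks down for all-green or all-orange palettes, where several distinct base colors would collapse onto the same corner.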
Why did the Trump regime not discover and eradicate this heretical sentence?
Does your router not support UPNP for dynamic port punching?
If we think of the many worlds interpretation, how many universes will we be making every time we assign a CCUID to something?
Starts interesting, then veers into the usual "true random number" bullshit. Use radioactive decay as source of your random numbers!
I take it you've never suggested to a front-end dev that maybe their contact form doesn't need a 1MB+ of JavaScript framework and could just be HTML that submits to a backend.
I dunno. I'm a big offender, but maybe making things that don't look at all like Bootstrap will help!
This piece is missing the most important reason OpenClaw is dangerous: LLMs are still inherently vulnerable to prompt injection / lethal trifecta attacks, and OpenClaw is being used by hundreds of thousands of people who do not understand the security consequences of giving an LLM-powered tool access to their private data, exposure to potentially untrusted instructions and the ability to run tools on their computers and potentially transmit copies of their data somewhere else.
As I recently observed [1], there is a lot more of this sort of coordination than people realize. I personally know of about three groups trying to get some cross-state initiatives implemented at the state level, and I'm not even particularly looking for such things.
It is not a coincidence. It means there is some organization out there pushing these. In general, "organization" here applies very broadly; there are some cases where it pretty much is just more-or-less normal people who organize to get something done. I wouldn't expect this particular thing is that for a second, of course. I'm just saying in general the term applies broadly. Someone is organized and trying to push this.
> Its PPC ad platform is completely predatory, loaded with dark patterns and hidden defaults that add billions to top-line revenue while strip-mining the accounts of sellers who often have no choice but to participate in the auctions.
At least they mark ads as 'sponsored', even though it isn't super prominent.
I always scroll until I see organic results, myself.
Bollocks - the wavelengths are on the order of hundreds of meters; there is no way you get microwave-like heating out of that. Even at 30 MHz you're still looking at a 10 meter wavelength, 3 meters at 100 MHz.
This system operates, according to TFA, up to the end of the AM band at roughly 1600 kHz, so 180 meters and change.
The danger is more likely there because someone might enter the tunnel and hit the feeder, which depending on the design can carry considerable power.
Why would they want to train on random garbage proprietary emails?
If their models ever spit out obviously confidential information belonging to their paying customers they'll lose those paying customers to their competitors - and probably face significant legal costs as well.
Your random confidential corporate email really isn't that valuable for training. I'd argue it's more like toxic waste that should be avoided at all costs.
This decreases the salience of DANE/DNSSEC by taking DNS queries off the per-issuance critical path. Attackers targeting multitenant platforms get only a small number of bites at the apple in this model.
https://www.influencewatch.org/non-profit/power-the-future/
> Daniel Turner is the founder and as of 2025 was the executive director of Power the Future. Turner is a commentator on energy and environmental issues, especially as they relate to jobs, rural communities, and the U.S. economy. Turner formerly worked as director of strategic communications at the Charles Koch Institute, vice president of communications at Generation Opportunity, and network and cable liaison director for the 2012 Republican National Convention.
So, yeah.
Swap the batteries for sodium ion chemistry.
> Where the CATL Naxtra battery really stands out, however, is cold-weather performance. CATL says its discharge power at -30 degrees Celsius (-22 degrees Fahrenheit) is three times higher than that of LFP batteries.
They can also charge at temperatures as low as -30°C to -40°C.
The first sodium-ion battery EV is a winter range monster - https://news.ycombinator.com/item?id=46936315 - February 2026
Take a look at the fraction of slop posts about AI on /new and what gets to the front page. Frankly who cares about another model that is samey with all the other models? We all know there's going to be a better one in 0.10 sec.
Take 2 of these and call me in the morning: https://en.wikipedia.org/wiki/Chlorpromazine
Berkshire Hathaway, not Warren Buffett.
Large stock sales always make headlines, but they don't automatically signal bearishness or really anything else. After all, what's the point of investing if you never realize gains?
They made Mark Zuckerberg into the monster he is too.
The trauma he had from not being funded by the VC companies in Boston made him unable to listen to anybody about absolutely anything -- even his lawyers when they told him not to have an incriminating paper trail for all his irresponsible decisions.
... saw a slop AI startup, hit the back button
The 2nd amendment is the only constitutionally-guaranteed right these days. And that too only if you have the correct political views.
Marketing is hard.
I skipped past the demo link at the top the first time I scanned through it; it could be more visible.
> The government sector doesn't produce virtually anything that counts toward GDP…
By a very narrow, technically-correct view, sure.
Preventing, say, people starving to death, dying of preventable disease, unchecked fraud, invasions by other countries, etc. probably helps GDP a bit.
Get over your FOMO:
I walked into that room expecting to learn from people who were further ahead. People who’d cracked the code on how to adopt AI at scale, how to restructure teams around it, how to make it work. Some of the sharpest minds in the software industry were sitting around those tables. And nobody has it all figured out.
People who say they have are trying to mess with your head.
Except that, unlike the Linux syscall interface and like almost every other OS out there, ABI compatibility is an accident, not a guarantee.
Luck has always been a solution within reach. It doesn't scale, though.
In the case of OpenClaw I think you're looking at a fairly pure instance of luck there, too. It isn't even a case of "I prepared for years until luck finally knocked" or any variant like that. It was just luck.
If that is the only counterexample I'd say it doesn't disprove the point, if anything it just strengthens it. Nobody can build a business plan based on "I plan to be as lucky as OpenClaw".
It's more that a personal post may or may not be interesting to even one other person, but a post that somebody else recommended was interesting to at least one other person. And that's a big gap!
Thank you. Didn't know about this. Very interesting.
Yeah, but let's keep downplaying use-after-free as something not worth eliminating in 21st century systems languages.
AI Isn't Stalling — You can't write a post title that wasn't written by AI!
(Seriously, you might as well wear a metal mask if you're going to use an em dash!)
Guest pass link (hope it works): https://www.nytimes.com/2026/02/18/opinion/ai-software.html?...
> LLMs are eating specialty skills. There will be less use of specialist front-end and back-end developers as the LLM-driving skills become more important than the details of platform usage. Will this lead to a greater recognition of the role of Expert Generalists? Or will the ability of LLMs to write lots of code mean they code around the silos rather than eliminating them?
This is one of the most interesting questions right now I think.
I've been taking on much more significant challenges in areas like frontend development and ops and automation and even UI design now that LLMs mean I can be much more of a generalist.
Assuming this works out for more people, what does this mean for the shape of our profession?
I was going to see if I could quote some job postings from my employer to compare this, and then discovered that even the intranet jobs board does not have salary ranges posted. Sigh. Going to have to feed that back to someone.
> Software engineers make great digital logic verification engineers. They can also gradually be trained to do design too. There are significant and valuable skill and knowledge crossovers.
> Software engineers lack the knowledge to learn analogue design / verification, and there’s little to no knowledge-crossover.
Yes. These are much more specific skills than HN expects: you need an EE degree or equivalent to do analogue IC design, while you do not for software.
However I think the very specific-ness is a problem. If you train yourself in React you might not have the highest possible salary but you'll never be short of job postings. There are really not a lot of analogue designers, they have fairly low turnover, and you would need to work in specific locations. If the industry contracts you are in trouble.
I can't really see much that stands out, as a Pixel 7 owner; just a faster CPU and slightly tweaked cameras?
Crucially, this wouldn't be an issue if the AI ran locally, but "sending all your internal email in cleartext to the cloud" is a potentially serious problem for organizations with real confidentiality requirements.
I think you misunderstand.
Taking notes during meetings isn't to improve understanding, or to "read" afterwards.
They're a record of what was discussed and decided, with any important facts that came up. They're a reference for when you can't remember, two weeks later, if the decision was A and B but not C, or A and C but not B.
Or when someone else delivers the wrong thing because they claim that's what the meeting decided on, and you can go back and find the notes that say otherwise.
I probably only need to find something in meeting notes later once out of every twenty meetings. But those times wind up being so critically important, it's why you take notes in the first place.
Previously: https://news.ycombinator.com/item?id=29476545
And to try to get them to execute BB(5) ;)
Are you accepting feature requests?
Last season's Brother Dude was awesome. I really felt sad for him. I have to say, however, my tolerance for manipulative sociopaths is very low - I'd totally punch McMillen in the face.
I was only aware of The Fall for its brilliant photography.
> Is OpenCL a thing anymore?
I guess CUDA got a lot more traction and there isn't much of a software base written for OpenCL. Kind of what happened with Unix and Windows - You could write code for Unix and it'd (compile and) run on 20 different OSs, or write it for Windows, and it'd run on one second-tier OS that managed to capture almost all of the desktop market.
I remember Apple did support OpenCL a long time ago, but I don't think they still do.
the money quote (at least for some):

> In fact, the current state of M3 support is about where M1 support was when we released the first Arch Linux ARM based beta; keyboard, touchpad, WiFi, NVMe and USB3 are all working, albeit with some local patches to m1n1 and the Asahi kernel (yet to make their way into a pull request) required. So that must mean we will have a release ready soon, right?
This sounds like a strange set of examples that may have been scams from the start. Can you name them?
The UK is full of long lasting charitable foundations. Many attached to schools and universities, but the highest profile example is probably the National Trust and its collection of historic buildings.
And the fact that you can make it 3x faster substantially increases the chances that nobody will read it in the first place.
Something is badly borked when the protections against an imaginary problem cause a real problem.
I wonder how far the balance will have to tip before the general public realizes the danger. Humanity's combined culture, for better or worse, up to 2021 or so was captured in a very large but still ultimately finite stream of bits. And now we're diluting those bits at an ever greater speed. The tipping point where there are more generated than handcrafted bits is rapidly approaching and obviously it won't stop there. A few more years and the genuine article is going to be a rarity.
This is a limitation of UNIX terminals; on other platforms, not tied to a no-longer-existing tty interface, this isn't an issue.
Unfortunately, given that we are stuck with UNIX-derived OSes, this is indeed a possible issue.
However, I would argue that for fancy stuff there is the GUI right there.
> Note that Hegseth and the defense department broke Anthropic’s terms when they used Claude as part of the Venezuela invasion
Venezuela was executed precisely in part because Hegseth was sidelined. This is SecDef throwing a hissy fit because Rubio can run his department better remotely than he can running around trying to make homoerotic workout videos.
Claude is good. It worked in Venezuela. Sidelining it because Hegseth is being a snowflake is how we lose wars.
I haven't seen the Public Image Ltd logo in a very long time.
One of my niche hobbies is trying to coin new terms - or spotting new terms that I think are useful (like "slop" and "cognitive debt") and amplifying them. Here's my collection of posts that fit that pattern: https://simonwillison.net/tags/definitions/
Something I've learned from this is that semantic diffusion is real, and the definition of a new term isn't what that term was intended to mean - it's generally the first guess people have when they hear it.
"Prompt injection" was meant to mean "SQL injection for prompts" - the defining characteristic was that it was caused by concatenating trusted and untrusted text together.
But people unfamiliar with SQL injection hear "prompt injection" and assume that it means "injecting bad prompts into a model" - something I'd classify as jailbreaking.
When I coined the term "lethal trifecta" I deliberately played into this effect. The great thing about that term is that you can't guess what it means! It's clearly three bad things, but you're gonna have to go look it up to find out what those bad things are.
So far it seems to have resisted semantic diffusion a whole lot better than prompt injection did.
Remember, mass copyright infringement is prosecuted if you're Aaron Swartz but legal if you're an AI megacorp.
This is so out of hand.
There's this. There's that video from Los Alamos discussed yesterday on HN, the one with a fake shot of some AI generated machinery. The image was purchased from Alamy Stock Photo. I recently saw a fake documentary about the famous GG-1 locomotive; the video had AI-generated images that looked wrong, despite GG-1 pictures being widely available. YouTube is creating fake images as thumbnails for videos now, and for industrial subjects they're not even close to the right thing. There's a glut of how-to videos with AI-generated voice giving totally wrong advice.
Then newer LLM training sets will pick up this stuff.
"The memes will continue" - White House press secretary after posting an altered shot of someone crying.
And how do you know it is not going to be an inverted J-curve?
My question is are there any historical parallels for the slide toward authoritarianism being reversed without a major catastrophe/war.
There were many "ground rules" in American society and politics that Trump has just proved can be thrown completely out the window, and it feels like there is no unringing that bell.
> every company has someone monitoring HN like a hawk.
Monitoring specific user accounts or keywords? Is this typically done by a social media reputation management service?
Tesla is the world’s largest meme stock. People stopped applying rational pricing models, and rationality in general, to it a long time ago.
The irony is that this is especially true for Coca Cola. They are basically an advertising company at heart. They sell flavored sugar water. For all the hype about "are you a Coke person or a Pepsi person", in blind tests most people can't tell the difference between Coke and generic cola. The billions they spend on marketing annually help ensure they can sell their flavored sugar water for a lot more than Aldi sells their store brand flavored sugar water.
Anthropic and OpenAI are both well documented as losing billions of dollars a year because their revenue doesn't cover their R&D and training costs, but that doesn't mean their revenue doesn't cover their inference costs.
It's quite good, but it gets very Six Feet Under by the end, and you have to suspend a lot of disbelief about technology; it's a little like Hackers in the sense that it's trying to communicate a feeling about operating in specific eras of computing, but not so much trying to realistically depict what it was like.
Christopher Cantwell, the showrunner, is also doing the new series of The Terror (aka North Pole Bear Show) that's premiering this year.
I ~~like~~ love Obsidian. I also like Steph Ango and his philosophies. In fact, a lot of his ideas shaped and improved mine. His approach is opinionated.
So pick the good ones you like and make your own.
For instance, I’m pretty well-organized, and I like it that way. This leads me to a native organization using folders and some patterns that I learnt along the way. Nothing more complicated. One day, if I have to walk away from Obsidian, I can, and I will still know where things are.
Right now, my organization is a loose combo of PARA[1] and Johnny Decimal.[2]
Obsidian is another tool; it just happens to be one hell of a good tool.
Just to be clear, the article is NOT criticizing this. To the contrary, it's presenting it as expected, thanks to Solow's productivity paradox [1].
Which is that information technology similarly (and seemingly shockingly) didn't produce any net economic gains in the 1970s or 1980s despite all the computerization. It wasn't until the mid-to-late 1990s that information technology finally started to show clear benefit to the economy overall.
The reason is that investing in IT was very expensive, there were lots of wasted efforts, and it took a long time for the benefits to outweigh the costs across the entire economy.
And so we should expect AI to look the same -- it's helping lots of people, but it's also costing an extraordinary amount of money, and the benefit to the people it's helping is currently outweighed by the people wasting time with it and by its expense. But we should recognize that it's very early days, and that productivity will rise and costs will come down as we learn to integrate it with best practices.
Mandatory XKCD: https://xkcd.com/1235/
It's interesting that the proliferation of cell phone cameras has not improved the quality of UFO reports much.
Nor has the availability of automatic UFO-spotting cameras.[1] They pick up drones, flocks of birds, and the International Space Station. But no good UFO shots.
LINEAR and GEODSS, which find near-earth objects and satellites using a pair of large telescopes at each site, have been running for decades and somehow don't seem to be picking up UFOs.
[1] https://www.space.com/spotting-ufos-sky-hub-surveillance
Videos are streaming but the quality seems to be low; on a few videos I checked, format 136 which is supposed to be 1080p is currently actually only 360p. Possibly a bandwidth problem.
> they also improved the way the characters are drawn so much that it lost its crude nature.
Computers.
I, too, liked the rawness of the earlier hand-drawn ones.
99.9% of people understood that sentence to be correct, in the spirit in which it was written. There are people who don't, but we still wouldn't say the sentence is false.
And you must have a long position if you're going to cherry pick so egregiously. The other incidents from that same paragraph that you conveniently left out:
* a collision with a fixed object at 17 miles per hour
* a crash with a truck at four miles per hour
* two cases where Tesla vehicles backed into fixed objects at low speeds.
So in the 5 cases listed in that paragraph, 3 of them were when a Tesla hit a stationary object. Hitting a stationary object should be like the last thing I would think an autonomous vehicle would have trouble with, but if you got rid of lidar and radar because Elon had a fever dream, maybe it's not so unexpected.
> And remember, this kind of effect is supposed to be so robust and generalizable that we can deploy it in court.
This goes for a lot of things that are utter bullshit. Lie detectors, handwriting, many others and the big bad bogeyman of the court: statistics.
Eyewitnesses being unreliable is one thing, but expert witnesses believing their own bs should be a liability if they are found to be wrong after the fact.
Watsi is without a doubt the best thing to come out of YC.
Their goal is to disenfranchise women, their examples are simply cover for that objective.
A lot of the lightweight cipher justification in this post seems like it overlaps a lot with Format Preserving Cryptography, which uses (generally) more conventional symmetric primitives (16-byte-block ciphers, for instance) to handle encryption with small domains:
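A minimal sketch of the format-preserving idea: a toy Feistel permutation over a six-digit domain, using a keyed hash as the round function. This is my own illustration -- not FF1/FF3 and not secure -- purely to show how a conventional symmetric construction can encrypt within a small domain (output stays a six-digit number):

```python
import hashlib

HALF = 10**3  # split a 6-digit value into two 3-digit halves

def _round(key, i, x):
    # Keyed pseudo-random round function (stand-in for a real PRF).
    digest = hashlib.sha256(f"{key}:{i}:{x}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % HALF

def fpe_encrypt(n, key, rounds=10):
    """Permute n within [0, 10**6): the ciphertext is another 6-digit value."""
    l, r = divmod(n, HALF)
    for i in range(rounds):
        l, r = r, (l + _round(key, i, r)) % HALF
    return l * HALF + r

def fpe_decrypt(n, key, rounds=10):
    """Invert fpe_encrypt by running the Feistel rounds in reverse."""
    l, r = divmod(n, HALF)
    for i in reversed(range(rounds)):
        l, r = (r - _round(key, i, l)) % HALF, l
    return l * HALF + r
```

Real FPE schemes like NIST's FF1 use AES as the round function and add cycle-walking or tweaks, but the structural idea -- a Feistel network whose halves live in the target domain -- is the same.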
> We've had shameless leaders for many generations
The magnitude is different. Bovino and Miller act above the law. And the corruption and graft among some cabinet members is off the charts. (This following a Presidency where the preëmptive pardon was pioneered for family members.)
I'm not entirely sure that's a bad thing. If you aren't proud enough of it to attach your real name, or your pseudonymous account, maybe it shouldn't be posted.
Their ratings would tell another story. Still one of the top rated shows.
> robotaxi is the name of the tesla unsupervised driving program
“robotaxi” is a generic term for (when the term was coined, hypothetical) self-driving taxicabs, that predates Tesla existing. “Tesla Robotaxi” is the brand-name of a (slightly more than merely hypothetical, today) Tesla service (for which a trademark was denied by the US PTO because of genericness). Tesla Robotaxi, where it operates, provides robotaxis, but most robotaxis operating today are not provided by Tesla Robotaxi.
The most interesting thing here to me is the leaderboard, because they actually included the estimated price per game. Gemini gets the highest score with a fairly reasonable cost (about 1/3 of the way down).
The legal system is totally inadequate to deal with the LLM era. It's extremely expensive to sue someone for libel; best you can usually do is win in the court of public opinion.
A core part of the HN ethos is avoiding siloing dynamics, which is exactly what [NOAI] would be.
200% YoY growth in Europe. Surpassed Ford in global auto sales, selling only EVs. Largest private employer in China. EV printer go brrr.
BYD uses aggressive discounts in bid to make Germany its leading European market - https://www.autonews.com/byd/ane-byd-discounts-germany-sales... - February 17th, 2026
China's BYD Overtakes Ford in Global Sales for the First Time - https://finance.yahoo.com/news/chinas-byd-overtakes-ford-glo... - February 12th, 2026
BYD's European registrations surge 270% in 2025 while Tesla slips 27% - https://cnevpost.com/2026/01/27/byd-european-registrations-s... - Jan 27th, 2026
BYD Sold Nearly Three Times As Many Cars As Tesla In Europe - https://www.carscoops.com/2025/11/byd-sold-nearly-3-times-as... - November 26th, 2025
The size of BYD's factory - https://news.ycombinator.com/item?id=42228138 - November 2024 (615 comments)
> We are effectively getting the same intelligence unit for half the compute every 6-9 months.
Something something ... Altman's law? Amodei's law?
Needs a name.
> to reach mass adoption, self-driving car need to kill one every, say, billion miles
They need to be around parity, so a death every 100 million miles or so. The number of folks who want radically more safety is about balanced by those who want a product on the market quicker.
> The average consumer isn't going to make a distinction between Tesla vs. Waymo.
I think they do. That's the whole point of brand value.
Even my non-tech friends seem to know that with self-driving, Waymo is safe and Tesla is not.
What is this even in response to? There's nothing about "playing dead" in this announcement.
Nor does what you're describing even make sense. An LLM has no desires or goals except to output the next token its weights were trained to produce. The idea of "playing dead" during training in order to "activate later" is incoherent. It is its training.
You're inventing some kind of "deceptive personality attribute" that is fiction, not reality. It's just not how models work.
India's Solar Manufacturing Excesses Turn a Boom into a Glut - https://news.ycombinator.com/item?id=47050286 - February 2026
Ember Energy: India’s electrotech fast-track: where China built on coal, India is building on sun - https://ember-energy.org/latest-insights/indias-electrotech-... - January 22nd, 2026
(India currently has 154.4GW/year of solar manufacturing capacity, 3x the country's current annual demand)
Java and .NET IDEs have had these capabilities for years now; even when Eclipse was the most used one, there were the tips from Checkstyle and other similar plugins.
Russia can retreat inside its internationally recognized borders and negotiate a ceasefire at any time.
This is already happening in C++, NVidia is the one pushing the senders/receivers proposal, which is one of the possible co-routine runtimes to be added into C++ standard library.
On the one hand, that's the point of the article. That it ceases to be a useful diagnostic indicator.
On the other hand, if there are 100 places in the shoulder where you can have an abnormality, and most people have just one or a couple but the other 98-99 are normal, then each one individually really is abnormal.
So it's complicated, and then it becomes important to figure out which abnormalities are medically relevant, in which combinations, etc.
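A back-of-envelope illustration of that point (the per-site probability is my own made-up number, not from the thread): even if each individual site is abnormal only 2% of the time, across 100 sites almost everyone carries at least one "abnormal" finding.

```python
p_site = 0.02   # assumed probability any single site shows an abnormality
n_sites = 100

# Probability a person has at least one abnormal finding,
# assuming the sites are independent.
p_any = 1 - (1 - p_site) ** n_sites   # ~0.87

# Expected number of findings per person.
expected = n_sites * p_site           # 2.0
```

Which is the article's point in miniature: a finding that nearly 87% of scans would show can't, by itself, explain anyone's pain.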