What are the most upvoted users of Hacker News commenting on? Powered by the /leaders top 50 and updated every thirty minutes. Made by @jamespotterdev.
The original comment has conflated every ill into "have to", combined with political fatalism. Not unreasonable given the way things have turned out, but yes it's not inevitable either. It's just the direction of travel that the majority chose.
Certain "have to"s are imposed by the physical world. The world will have to use less oil in 2026 than in 2025, because production has been so heavily impacted by the war. What happens beyond that... well, only a fairly small number of people get to make that decision. The next US presidential election is in 2028.
> made-up numbers
It's important to note that in most jurisdictions you can't actually do this legally? Like, you may be able to get away with it, but it is actually illegal to sell financial services by misrepresentation?
I think both Elsevier and the people who appropriate IP for training commercially deployed AIs without the consent of the author(s) should be treated the same legally.
I don't think that's the reason. Modern supercars have so much power that the average person that can afford them is going to wreck the drivetrain in a very short while if they have to manage all that power themselves. Automatic gearboxes are far more forgiving. You see the same with Porsches that have manual gearboxes, if you read out the ECU you'll see them overrev many times more than with the autos, if at all (in fact I don't recall seeing an auto that had overrevved).
Discovery is a lot more intrusive than people expect.
Notably, this applies to the "product tour" a lot of products want to give you when they've added new features and I find this particularly obnoxious, especially with Adobe tools.
Like a lot of times when I am using Lightroom I just shot 3000 photos at a sports game and feel under the gun to select a few out and develop them or I am using Acrobat to handle some stressful paperwork which is late. I close 100s if not 1000s of modal dialogs that never should have been opened every day and just don't need another one.
It's bad from the viewpoint of Adobe because I wind up dismissing these messages out of hand.
Adobe wants me to see the value I am getting from my Creative Cloud subscription, like I am likely to keep paying for it if I enjoy more features in more of the products. Like lately I discovered Adobe Fonts is great: like I find looking for free fonts is the most depressing thing in web development and graphic design, I can spend hours looking at fonts and making comps and thinking "I can't stand that 'k'". Adobe Fonts on the other hand has quality fonts that are well organized and often I can put in 15 minutes and walk out with something that works so well with my brand that if I want to set stuff in that font with Pillow of course I am going to plunk down $90 and buy it -- I don't feel bad at all that the fonts are tied to Adobe tools and my CC subscription.
In terms of execution you just expect something like this to be crap. The integration of Adobe Fonts into Photoshop is broken: it can lock Photoshop hard and force you to kill the process. On the other hand it works great with Illustrator. Marketing-driven development always seems to have a lack of empathy and attention to quality that in the end is self-defeating.
---
Lately I've gotten hooked on the mobile game Arknights, which has extensive lore, too many game modes to count, very complex mechanics, and hundreds of characters with unique abilities (e.g. even the "trash" 3-star characters usually have something special about them and are designed to make teams that punch above their weight).
Arknights gamifies learning the game and engaging with the mechanics by offering daily, weekly, and campaign rewards for taking actions, completing levels, developing characters, etc. This is part of a number of mechanisms that gradually get you up to speed on the game mechanics, reveal the world, etc. These kind of mechanisms, used gently, could work for applications software.
But I think timing is everything. One of the most annoying people in downtown Ithaca is a panhandler who comes up from behind and starts demanding money or the bandanna off your head, he doesn't bother to make eye contact, he doesn't look to see if you're receptive or for a moment when you might be open, he just makes demands and gets angry when you deny or ignore him. I give money to panhandlers quite often if they engage me person-to-person and are agreeable but this guy is like so much application software today.
Thanks for sharing, that brings back good memories.
Here is another one, from the first JavaStation,
https://www.youtube.com/watch?v=yxV_pR1ZsXM
Sun was my favourite UNIX vendor, oh well.
Probably CGNAT, "Carrier-Grade NAT": https://en.wikipedia.org/wiki/Carrier-grade_NAT
Huge, huge numbers of machines behind a single external IP mean that your internet access carries all their reputation by proxy. Since switching from Comcast to a smaller fiber company that uses CGNAT, I've seen somewhat more Cloudflare challenges.
3rd edition (2025) free to read online
Jupyter notebooks: https://github.com/fchollet/deep-learning-with-python-notebo...
> useless policy season
What does this mean?
> these companies
I think the conclusion the market is rapidly and correctly reaching is we aren’t in an AI bubble, we’re in an OpenAI bubble.
Google, Amazon and Anthropic look likely to see ROI on their capital investments because they've made them halfway reluctantly. Microsoft is up in the air. Not sure what Meta is doing. And with the benefit of hindsight, OpenAI used capex as a marketing strategy with investors (while Sam Altman materially lied about his compensation and somehow looped Paul Graham and Jessica Livingston, co-founder of Y Combinator, into his racket).
It's definitely the sort of thing that Crowley from Good Omens would be working on.
For when you need to store a copy of the internet, and have been granted immunity for your copy of Anna's Archive.
A long time ago I did that to make Canonical's Launchpad easier to read - mostly making tables look nicer and so on. It was really nice. I saw similar initiatives at Workday as well - browser plugins that added extra functionality to the development instances of the application.
While it does add the burden of programming, the microcontroller will generally be more stable, less temperature-sensitive, and consume less power.
LLMs are accelerants. They enable people to do patent and copyright infringement at a much larger scale. As we know from previous examples, if you break the law enough as a company eventually they have to let you keep doing it.
It was notable that in Minneapolis enough people were doing this kind of thing that ICE were seriously impaired, and had to resort to escalation and shooting Americans in the street.
Yes, Microsoft suffers from schizophrenic management: it is easier for externals to talk between teams than for the internal teams themselves. There are quite a few stories on the matter.
Apple really doesn't care how apps are used, Radar issues go untouched for several releases.
EDIT: missing "management".
Those Windows mistakes have been sorted out for a long time now.
Yes, and all the paper straws in the world aren't going to save it, between wars, attacks on critical infrastructure and AI overlords.
Agreed, which is why my stance on the matter at least on what I have control over, is either GPL/LGPL, or commercial license.
"Be entitled to whatever one is willing to give upstream" is my motto.
3 DOF per leg, so it needs 12 motors and controllers. Getting that under $1000 is nice.
Here's the US$18 motor: [1] Those things are getting really cheap. He did have to rewind it, though, for more turns with thinner wire. The manufacturer mentions that you can order with "custom Kv", which means you might be able to get a different winding from the factory if you order a reasonable quantity. Especially if you tell them that makes them "robot motors".
Motor overheating might be a problem. The dog, just standing, has its motors stalled under load, converting power to heat. Drones don't do that. Temperature feedback would help if this thing has to operate for extended periods. Remember yesterday's article on humanoid robots and their cooling problems.
The motor controller is nice too, and cheap at $49. Needed fixes to the firmware, but that's not surprising at the price. High performance motor controllers used to cost about $1000.
Repurposed drone technology has done wonders for legged robots. We're not quite at the point where limb drive hardware is off the shelf, but it's way better than it used to be.
[1] https://www.xntyi.com/tyi-5008-kv335/kv400-high-speed-brushl...
> But these LLMs are like Happy Gilmore. They get to the green in one shot then they orbit the hole with an extremely dubious short game.
Except that he got good at his short game by the end. LLMs will get there sooner than we think.
True, but contrary to the fruity models, some of these are upgradeable.
My Asus netbook started with basic configuration and was maximised during its lifetime, just like any PC desktop.
The transfer rates limit how much each chip can be active at any given time, so a heat-aware writing allocator can pick the least active blocks for the next writes and distribute the heat accordingly. Even if it’s not heat-aware, the tendency will be that the writes will be distributed over as many chips as there are, and so will be the heat generated.
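The allocation idea above can be sketched in a few lines. This is a hypothetical illustration, not a real flash translation layer: the chip count, the activity-score decay factor, and the class name are all made up for the example.

```python
# Hypothetical sketch of a heat-aware write allocator: track a rough
# per-chip activity score and steer each new write to the "coolest"
# (least recently active) chip, spreading heat across the package.

class HeatAwareAllocator:
    def __init__(self, num_chips=16, decay=0.9):
        self.heat = [0.0] * num_chips  # rough per-chip activity score
        self.decay = decay             # older writes count for less heat

    def pick_chip(self):
        # Age all scores, then choose the least active (coolest) chip.
        self.heat = [h * self.decay for h in self.heat]
        chip = min(range(len(self.heat)), key=self.heat.__getitem__)
        self.heat[chip] += 1.0  # this write adds activity (and heat)
        return chip

alloc = HeatAwareAllocator(num_chips=4)
writes = [alloc.pick_chip() for _ in range(8)]
print(writes)  # -> [0, 1, 2, 3, 0, 1, 2, 3]
```

With idle chips the policy degenerates to round-robin, which is exactly the "writes distributed over as many chips as there are" behavior described above; the decay term only matters once some chips have been hammered harder than others.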
Now, I would LOVE to see this much SLC flash on a direct to bus attachment setting.
QLC NAND
The datasheet shows 3GB/s sequential write, which for 245.76TB means writing the whole drive takes around 22h45m. Odd that the endurance is specified as "1.0 SDWPD", which is almost meaningless since the drive takes roughly that long to write at full speed.
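The back-of-the-envelope math checks out (decimal units, as drive datasheets use):

```python
# Time to write the full 245.76 TB drive at the quoted 3 GB/s sequential rate.
capacity_gb = 245.76 * 1000      # TB -> GB, decimal units
write_gb_per_s = 3.0
hours = capacity_gb / write_gb_per_s / 3600
print(round(hours, 2))  # -> 22.76 hours, i.e. roughly 22h45m
```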
> At scale, 1.9 times more energy is required for an HDD deployment
...but those HDDs are going to hold data for far more than twice as long. It's especially infuriating to see such secrecy and vagueness around the real endurance/retention characteristics for SSDs as expensive as these.
On the other hand, 60TB of SLC for the same price would probably be a great deal.
Trump also fired the Immigration Detention Ombudsman.[1]
[1] https://www.independent.co.uk/news/world/americas/us-politic...
Which is why exclusives matter; you don't go to Steam if the game is only on one of Switch, PlayStation, Xbox, Xbox PC, Android, iOS, Apple Arcade, ...
Oh! Dear Lord. I still want to hear my Indian friends speak Indian to me during Support Calls. These days, I’m hearing American accents trying to calm me down over my complaints on that excess masala in the idli-dosa-pav-bhaji butteerr-chicken combo in the El Camino Eatery in the outskirts of Jhalandar.
Not the OP, but it might be that AI isn't as good at systems programming as it is at other domains, or it might be that you're using it differently than I am. I don't know which one it is (maybe AI just isn't good at writing the language you work with).
For things like web frontends/backends, though, it works beautifully. I ship things in days that would take me weeks to write by hand, and I'm very fast at writing things by hand. The AI also ships many fewer bugs than our average senior programmer, though maybe not fewer bugs than our staff programmers.
I just created (yesterday) a product tour I'm pretty proud of:
It's a writing reviewer app, and the landing page is the product. It's literally a document with a critique. You can write in it, use the editor, even delete the whole page.
I always skip tours, but I think this kind of thing (if your product can support it) is much better. Then again, this isn't so much a "you've logged in, now let us teach you how to use this product" as a "welcome, here's what this product does".
You just described 95% of the parts of all software, especially in this era.
Yes, that's the problem
> the LLMs will ship code the LLMs understand, and whether any human specifically understands any particular part will mostly not matter.
I find this particularly funny. There were more than a couple Star Trek episodes where some alien planet depends on some advanced AI or other technology that they no longer understand, and it turns out the AI is actually slowly killing them, making them sterile, etc. (e.g. https://en.wikipedia.org/wiki/When_the_Bough_Breaks_(Star_Tr... )
Sure, Star Trek is fiction, but "humans rely on a technology that they forget how to make" is a pretty recurrent theme in human history. The FOGBANK saga was pretty recent: https://en.wikipedia.org/wiki/Fogbank
It just amazes me that people think "Sure, this AI generated code is kinda broken now, but all we need is just more AI code to fix it at some unknowable point in the future because humans won't be able to understand it!"
Not too long ago, someone submitted an AI demo to HN that resulted in a 3.1GB download upon visiting the page: https://news.ycombinator.com/item?id=47823460
It reminds me of the "dialup warnings" common 2 decades ago on huge pages (often containing many images). Yes, bandwidth and storage have gotten cheaper, but the unwanted waste should still be called out. I'm not even anti-AI, having waited several hours recently to get some local models to experiment with, but that's because I wanted to and made the decision to use that bandwidth.
>That is exactly where the disagreement stems from. That app is a draft version that might work for a couple people. It won't scale. It won't be secure. It won't handle edge cases. It won't be flexible enough to iterate based on customer feedback.
As if startup code doesn't have the same issues pre-AI? And still they get to billions of valuations with such code.
They can always pay some beefier consultants when they absolutely have to, for scaling it up or hardening it.
That "it won't be flexible enough to iterate based on customer feedback" is more wishful thinking. It would be code like any other code, following some patterns. In fact, the architecture can be fine-tuned by the human in the loop anyway - they just won't need 5 more humans to assist them in coding it.
>Because if all they do is: What humans do, just faster... cool, useful, but not worth all the hype.
That's literally what automation in any field is. Why should it be something more, as if this huge breakthrough should already be taken for granted within a few years of being available?
It would be useful if this site included a human-readable explanation of what terms like "Disposition: Diversion" compared to "Disposition: 871PC/No Sufficient Cause" meant.
Or a clear definition of what the question "Which released charge sounds worse?" means.
What would I need to rant about? Sometimes the world does my ranting for me.
> was surprising. Goes against the idea that deregulation allows companies to squeeze consumers and earn excess profits.
That's because this assertion is economically illiterate. Deregulation can lead to increased profits where otherwise companies have monopoly power. But often, the regulation was there in the first place to ensure that companies had sufficient profit to invest in expensive infrastructure. (E.g. railroads).
He was always a tool outside his narrow field. Just got a lot of fans for saying basic atheist 101 as if he invented it (and even that, naively).
Unless Apple comes up with a novel memory, which I wouldn’t put beyond Cupertino, it makes more sense to participate in economies of scale.
> She also successfully applied for an outdoor seating permit through the Police e-service, which didn’t require BankID. Her first submission included a sketch she had generated herself, despite having never seen the street outside the café. Unsurprisingly, the Police sent it back for revision. [...]
> When she makes a mistake, she often sends multiple emails to suppliers with the subject “EMERGENCY” to cancel or change the order.
I really don't like these research projects which waste the time of real human beings who haven't opted into the experiment.
>Memory designs are pretty entrenched with the various patents involved...
Can't be any more entrenched than CPUs, GPUs, and broadband chips, which Apple still designs.
Doesn't matter. He did it through the use of AI, and the AI, despite being explicitly told otherwise, deleted the database.
Both: he should have learned his lesson AND AI should not be trusted.
> Of course, people who never approached agriculture will be appalled at this, and call it great injustice.
Uneducated rice farmers in Bangladesh would understand the problem better than the people complaining about this.
> for food security
They overproduce for votes. Countries without farmer blocs swinging elections stockpile non-perishables for food security.
As an owner of an apple tree: that's great for about two months, but I don't have commercial quantities of cold storage.
He can have my psych meds if he promises to take them himself.
Very real risk of this going in reverse: people building inaccessible websites to prevent AI use.
> a model is not software
When does code become software?
That app really sucks. In fact the app that mobile sites want you to download is almost always so bad that it should be required by law to have a STEAMING PILE OF POO EMOJI on the UI element that nags you to download it.
> The great injustice is very much me paying however much per pound of peaches when the supply is so great that they should be much cheaper.
But it's not, because the supply of and competing demands for motor fuel and all the other things required between the orchard and the store are involved, not just the supply of peaches at the orchard.
That says a lot more about them than it did about you.
Framing this as needing "consent" is deeply misguided. It's as silly as claiming that Microsoft Word installed an English language spellcheck dictionary without your consent. It's just part of the software. You consented to installing the software and having it autoupdate. That covers it.
Now we can argue whether or not it's an appropriate amount of disk space or bandwidth to use, but that's just a reasonable practical discussion to have. Framing it around consent is unnecessarily inflammatory and makes it harder to have a discussion, not easier.
IBM also infamously patented the XOR cursor.
> I would guess that using the tab key in this way was part of a patent they were pursuing and Microsoft's use would show this to be 'obvious' and thus not patentable.
IBM insisting it not be Tab wouldn't make sense. Microsoft was working for them, and the programs had to adhere to the CUA (Common User Access) standard.
Here's a real IBM 3270 keyboard.[1] Note the "Next field" key on the left, and the matching "Previous field" key on the right.
The IBM 3270 was a device for filling up forms. The mainframe sent the terminal a form with blanks, and the terminal let the user fill in the blanks. The terminal hardware prevented the user from overwriting the static parts of the form, and could apply some other form constraints, such as numeric fields. That was all done by the terminal. When the form was filled in, the user pressed ENTER, and the completed form was sent to the mainframe as one transaction. This approach let one mainframe service huge numbers of terminals. The user never experienced delays while typing and could type at full speed, often without looking.
PCs didn't have that usage model. The PC crowd was thinking "typewriter". One of the first terminals for home computers was called the "TV Typewriter".
Web forms do have that model, but with less consistency.
[1] https://sharktastica.co.uk/resources/images/model_bs/themk_1...
Having worked at IBM, I would guess that using the tab key in this way was part of a patent they were pursuing and Microsoft's use would show this to be 'obvious' and thus not patentable. But that is just a guess.
In the 80's IBM had a whole class of high level technical people called "Systems Engineers" whose entire job description was to opine on the merits of any given system. Not write systems, not debug them, and certainly not to explain them, it was simply to opine "you're doing it wrong."
In case you missed it, recent Rancher Desktop versions also went through this.
I feel like "AI didn't delete your database, you did" is all about who has accountability, though.
Claude does this too, with the Chrome extension.
It breaks like 80% of the time for me, and it's incredibly slow. Having it use Playwright (bonus: can test in FF/Saf too) was a big improvement.
> I don’t understand why Rust even has panics if its primary goal is safety.
Rust's goal is memory safety. Panics are perfectly memory safe.
MTP support is being added to llama.cpp, at least for the Qwen models (https://github.com/ggml-org/llama.cpp/pull/20533), and I'd imagine Gemma 4 will come soon.
The performance uplift on local/self-hosted models in both quality and speed has been amazing in the last few months.
A lot of people underestimate the effect of shared values and beliefs on their happiness, though, and would be better served by taking a lower-paying position to live in an area that fits their values better.
FWIW I just checked and AMC Theaters are not connected to the streaming network AMC+.
In my house we have two businesses [1][2] so that adds two cards. You may also have a card for medical expenses that can be reimbursed with a FSA/HSA or a prepaid debit card that you got as a gift, etc.
[1] don't tell Mr. Fox he's running a business
[2] ... and will probably be adding a third
> something that isn't what you want will run circles around you and eat your lunch
Yes, exactly. Spoken like a true biologist. It's not really surprising that there's a massive backlash against AI, introducing an unnatural predator into the ecosystem of humans. People don't want to be lunch.
https://www.perlmonks.org/index.pl?replies=1;node_id=437032;...
As befits a history of perl, it is full of random quotes and rambling discourses about history, but it has a lot of info in it.
That mostly works when you're single and without any hard ties.
Uprooting a well grown tree isn't easy.
> Most regulations (ADA, affirmative action, etc.) fall into the "not woke enough" category of model regulation
For sake of argument, let’s assume this is true. Those rules are still structured as laws, with boundaries and legal recourse. The precedent being set, that the President gets “voluntary” deference from private companies, is un-American and will be abused by the left.
We change how we value code: https://jerf.org/iri/post/2026/what_value_code_in_ai_era/
Short-short version, code will still be accruing value in proportion to how much of the real world it has encountered. The bottleneck on building valuable code will be how much real world there is to go around. As is so often the case, what may initially seem to kill SaaS will actually make them stronger as they end up with more exposure to the real world than some random guy's random AI code.
From 'the hacker did it' we have moved to 'the AI did it'. The problem set is roughly the same.
How is it not related to the subject?
>The same is true in a physical card wallet.
If only a digital UI didn't have the same skeuomorphic limitations a physical card has ...oh wait!
(And it's not true that the same issue exists with a physical card wallet. With a physical card, either you get a different design from the bank, or you can trivially write on it with a marker or add a sticker to differentiate it.)
>An 80 year old with early onset challenges can work this wallet, pick a card, and then hold the phone to the reader at a store.
Ah, yes, the standard target group for iOS and the Wallet app in particular.
I swear, the arguments people make...
I feel like a broken record to be saying this again, but seeing Claude's writing everywhere grates. Maybe I'm preaching to the choir, but can we at least post articles that weren't so obviously Claude?
It's been a year or two since I drove a Tesla, but in FSD mode it insisted I at least touch the steering wheel regularly.
They were corporate evil from day 1. The rest was just PR slogans, and playing the good guy as long as you don't need to squeeze profits.
Just enter the location and time in question as part of creating the pass?
Add a loan fee and be done with it?
Has The Information broken any critical news about OpenAI? I never connected the dots around why I started finding it increasingly not worth paying for over the last year or two, but editorial bias feels correct.
I'll probably get some flak for this, but this is about as good a layoff email as he could have sent:
* explains the reasons (financials, AI enablement)
* talks about what folks who are leaving get in detail (first) and thanks them
* talks to the folks who are staying
Layoffs are hard, no doubt, and I am not sure he's making the right choice. I see plenty of doubt about some of the actions in other comments that echoes mine. I certainly wouldn't want to have 15 direct reports and also ship production code regularly. But as CEO, it's his job to make these kinds of choices.
The proof is in the pudding as they say. We'll see how Coinbase does with this new orientation in the next year or so and that will determine if this was a wise or foolish move. Is there a flood of talent leaving? Major breaches? Business as usual with better than expected profits?
Time will tell.
Good call!
The first sentence of that paragraph — "Another attempt came during the American Civil War." — should have been the first sentence of the next paragraph.
"Use it or lose it" applies to the brain as well as your muscles.
This was well known before this paper.
Real commitment to the bit there.
It is still like that. The airline’s operations all depend on the flight crew being in the right place at the end of the flight, which is a higher priority than getting a passenger there.
https://en.wikipedia.org/wiki/2017_United_Express_passenger_...
I think some of what is offensive about the Electron situation is that way too many Electron apps are things that live in the tray or try to hijack the application lifecycle. So these are not just burning up memory but they are burning up memory for some trivial thing in the tray and making your machine slow to boot and complicating UI for the tray.
Are you suggesting that Microsoft and Amazon's sponsorship of Overture comes with an understanding that people who work on Overture will spend their time writing articles that "boost AI"?
Does "boosting AI" include opening an article with "Frontier models are really good at coding these days, much better than they are at other tasks"?
It always saddens me Intel GPUs are such fourth class citizens.
Why in the world do people keep shipping Chrome with their pseudo native applications?
Yes they are, the UI layer is mostly JS, outside the rendering and layout engines.
Not on my devices. Auto update has been abused so often now that it is an embarrassment to the industry. Auto update should be for bug fixes and security issues only.
Agree on title. Too dramatic.
The author seems to be obsessing about the overhead for trivial functions. He's bothered by overhead for states for "panicked" and "returned". That's not a big problem. Most useful async blocks are big enough that the overhead for the error cases disappears.
He may have a point about lack of inlining. But what tends to limit capacity for large numbers of activities is the state space required per activity.
Aren't you supposed to return a 0 status code when "yea done!" and some other status code when it wasn't done?