What are the most upvoted users of Hacker News commenting on? Powered by the /leaders top 50 and updated every thirty minutes. Made by @jamespotterdev.
For when you need to store a copy of the internet, and have been granted immunity for your copy of Anna's Archive.
A long time ago I did that to make Canonical's Launchpad easier to read - mostly making tables look nicer and so on. It was really nice. I saw similar initiatives at Workday as well - browser plugins that added extra functionality to the development instances of the application.
While it adds a programming burden, the microcontroller will generally be more stable, less temperature sensitive, and consume less power.
LLMs are accelerants. They enable people to do patent and copyright infringement at a much larger scale. As we know from previous examples, if you break the law enough as a company eventually they have to let you keep doing it.
It was notable that in Minneapolis enough people were doing this kind of thing that ICE were seriously impaired, and had to resort to escalation and shooting Americans in the street.
Yes, Microsoft suffers from schizophrenic management; it is easier for externals to talk between teams than for the internal teams themselves, and there are quite a few stories on the matter.
Apple really doesn't care how apps are used, Radar issues go untouched for several releases.
EDIT: missing "management".
Yes, and all the paper straws in the world aren't going to save it, between wars, attacks on critical infrastructure and AI overlords.
Agreed, which is why my stance on the matter at least on what I have control over, is either GPL/LGPL, or commercial license.
"Be entitled to whatever one is willing to give upstream" is my motto.
3 DOF per leg, so it needs 12 motors and controllers. Getting that under $1000 is nice.
Here's the US$18 motor: [1] Those things are getting really cheap. He did have to rewind it, though, for more turns with thinner wire. The manufacturer mentions that you can order with "custom Kv", which means you might be able to get a different winding from the factory if you order a reasonable quantity. Especially if you tell them that makes them "robot motors".
Motor overheating might be a problem. The dog, just standing, has its motors stalled under load, converting power to heat. Drones don't do that. Temperature feedback would help if this thing has to operate for extended periods. Remember yesterday's article on humanoid robots and their cooling problems.
The motor controller is nice too, and cheap at $49. Needed fixes to the firmware, but that's not surprising at the price. High performance motor controllers used to cost about $1000.
Repurposed drone technology has done wonders for legged robots. We're not quite at the point where limb drive hardware is off the shelf, but it's way better than it used to be.
[1] https://www.xntyi.com/tyi-5008-kv335/kv400-high-speed-brushl...
> But these LLMs are like Happy Gilmore. They get to the green in one shot then they orbit the hole with an extremely dubious short game.
Except that he got good at his short game by the end. LLMs will get there sooner than we think.
True, but contrary to the fruity models, some of these are upgradeable.
My Asus netbook started with basic configuration and was maximised during its lifetime, just like any PC desktop.
The transfer rates limit how much each chip can be active at any given time, so a heat-aware writing allocator can pick the least active blocks for the next writes and distribute the heat accordingly. Even if it’s not heat-aware, the tendency will be that the writes will be distributed over as many chips as there are, and so will be the heat generated.
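A toy sketch of that allocator idea (purely illustrative; the class name and the decaying-activity heuristic are my own invention, not from any real flash translation layer):

```python
# Steer each write to the chip with the lowest recent activity, which
# serves as a rough proxy for its temperature. Activity decays over time
# so old writes stop counting against a chip.

class HeatAwareAllocator:
    def __init__(self, num_chips, decay=0.9):
        self.activity = [0.0] * num_chips  # per-chip heat proxy
        self.decay = decay

    def pick_chip(self):
        # choose the "coolest" chip for the next write
        return min(range(len(self.activity)), key=lambda i: self.activity[i])

    def record_write(self, chip, size):
        # age all counters, then charge the chip that was written
        self.activity = [a * self.decay for a in self.activity]
        self.activity[chip] += size

alloc = HeatAwareAllocator(num_chips=8)
for _ in range(100):
    chip = alloc.pick_chip()
    alloc.record_write(chip, size=1)
```

Even this naive version spreads consecutive writes across all chips, which is the "distribute the heat" tendency described above; a real FTL would combine it with wear leveling.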
Now, I would LOVE to see this much SLC flash on a direct to bus attachment setting.
QLC NAND
The datasheet shows 3GB/s sequential write, which for 245.76TB means writing the whole drive takes around 22h45m. Odd that the endurance is specified as "1.0 SDWPD", which is almost meaningless since the drive takes roughly that long to write at full speed.
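For anyone who wants to check the back-of-the-envelope math (numbers taken from the datasheet figures quoted above):

```python
capacity_tb = 245.76   # advertised capacity, TB
write_gbs = 3.0        # sequential write speed, GB/s

seconds = capacity_tb * 1000 / write_gbs  # using 1 TB = 1000 GB
hours = seconds / 3600
print(f"{hours:.2f} h")  # ~22.76 h, i.e. roughly 22h45m
```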
At scale, 1.9 times more energy is required for an HDD deployment
...but those HDDs are going to hold data for far more than twice as long. It's especially infuriating to see such secrecy and vagueness around the real endurance/retention characteristics for SSDs as expensive as these.
On the other hand, 60TB of SLC for the same price would probably be a great deal.
Trump also fired the Immigration Detention Ombudsman.[1]
[1] https://www.independent.co.uk/news/world/americas/us-politic...
Which is why exclusives matter, you don't go to Steam if the game is only on one of Switch, PlayStation, XBox, XBox PC, Android, iOS, Apple Arcade,.....
Oh! Dear Lord. I still want to hear my Indian friends speak Indian to me during Support Calls. These days, I’m hearing American accents trying to calm me down over my complaints on that excess masala in the idli-dosa-pav-bhaji butteerr-chicken combo in the El Camino Eatery in the outskirts of Jalandhar.
Not the OP, but it might be that AI isn't as good at systems programming as it is at other domains, or it might be that you're using it differently than I am. I don't know which one it is (maybe AI just isn't good at writing the language you work with).
For things like web frontends/backends, though, it works beautifully. I ship things in days that would take me weeks to write by hand, and I'm very fast at writing things by hand. The AI also ships many fewer bugs than our average senior programmer, though maybe not fewer bugs than our staff programmers.
I just created (yesterday) a product tour I'm pretty proud of:
It's a writing reviewer app, and the landing page is the product. It's literally a document with a critique. You can write in it, use the editor, even delete the whole page.
I always skip tours, but I think this kind of thing (if your product can support it) is much better. Then again, this isn't so much a "you've logged in, now let us teach you how to use this product" as a "welcome, here's what this product does".
You just described 95% of the parts of all software, especially in this era.
Yes, that's the problem
> the LLMs will ship code the LLMs understand, and whether any human specifically understands any particular part will mostly not matter.
I find this particularly funny. There were more than a couple Star Trek episodes where some alien planet depends on some advanced AI or other technology that they no longer understand, and it turns out the AI is actually slowly killing them, making them sterile, etc. (e.g. https://en.wikipedia.org/wiki/When_the_Bough_Breaks_(Star_Tr... )
Sure, Star Trek is fiction, but "humans rely on a technology that they forget how to make" is a pretty recurrent theme in human history. The FOGBANK saga was pretty recent: https://en.wikipedia.org/wiki/Fogbank
It just amazes me that people think "Sure, this AI generated code is kinda broken now, but all we need is just more AI code to fix it at some unknowable point in the future because humans won't be able to understand it!"
Not too long ago, someone submitted an AI demo to HN that resulted in a 3.1GB download upon visiting the page: https://news.ycombinator.com/item?id=47823460
It reminds me of the "dialup warnings" common 2 decades ago on huge pages (often containing many images). Yes, bandwidth and storage have gotten cheaper, but the unwanted waste should still be called out. I'm not even anti-AI, having waited several hours recently to get some local models to experiment with, but that's because I wanted to and made the decision to use that bandwidth.
>That is exactly where the disagreement stems from. That app is a draft version that might work for a couple people. It won't scale. It won't be secure. It won't handle edge cases. It won't be flexible enough to iterate based on customer feedback.
As if startup code doesn't have the same issues pre-AI? And still they get to billions of valuations with such code.
They can always pay some beefier consultants when they absolutely have to, for scaling it up or hardening it.
That "it won't be flexible enough to iterate based on customer feedback" is more wishful thinking. It would be code like any other code, following some patterns. In fact, the architecture can be fine-tuned by the human in the loop anyway - they just won't need 5 more humans to assist them in coding it.
>Because if all they do is: What humans do, just faster... cool, useful, but not worth all the hype.
That's literally what automation in any field is. Why should it be something more, as if this huge breakthrough can already be taken for granted within a few years of being available?
It would be useful if this site included a human-readable explanation of what terms like "Disposition: Diversion" compared to "Disposition: 871PC/No Sufficient Cause" meant.
Or a clear definition of what the question "Which released charge sounds worse?" means.
What would I need to rant about? Sometimes the world does my ranting for me.
> was surprising. Goes against the idea that deregulation allows companies to squeeze consumers and earn excess profits.
That's because this assertion is economically illiterate. Deregulation can lead to increased profits where otherwise companies have monopoly power. But often, the regulation was there in the first place to ensure that companies had sufficient profit to invest in expensive infrastructure. (E.g. railroads).
He was always a tool outside his narrow field. Just got a lot of fans for saying basic atheist 101 as if he invented it (and even that, naively).
Unless Apple comes up with a novel memory, which I wouldn’t put past Cupertino, it makes more sense to participate in economies of scale.
> She also successfully applied for an outdoor seating permit through the Police e-service, which didn’t require BankID. Her first submission included a sketch she had generated herself, despite having never seen the street outside the café. Unsurprisingly, the Police sent it back for revision. [...]
> When she makes a mistake, she often sends multiple emails to suppliers with the subject “EMERGENCY” to cancel or change the order.
I really don't like these research projects which waste the time of real human beings who haven't opted into the experiment.
>Memory designs are pretty entrenched with the various patents involved...
Can't be any more entrenched than CPUs, GPUs, and broadband chips, which Apple still designs.
Doesn't matter. He did it through the use of AI, and AI, despite being explicitly told otherwise, deleted the database.
Both he should have learned his lesson AND AI should not be trusted.
> Of course, people who never approached agriculture will be appalled at this, and call it great injustice.
Uneducated rice farmers in Bangladesh would understand the problem better than the people complaining about this.
> for food security
They overproduce for votes. Countries without farmer blocs swinging elections stockpile non-perishables for food security.
As an owner of an apple tree: that's great for about two months, but I don't have commercial quantities of cold storage.
He can have my psych meds if he promises to take them himself.
Very real risk of this going in reverse: people building inaccessible websites to prevent AI use.
> a model is not software
When does code become software?
That app really sucks. In fact the app that mobile sites want you to download is almost always so bad that it should be required by law to have a STEAMING PILE OF POO EMOJI on the UI element that nags you to download it.
> The great injustice is very much me paying however much per pound of peaches when the supply is so great that they should be much cheaper.
But it's not, because the supply of and competing demands for motor fuel and everything else required between the orchard and the store are involved, not just the supply of peaches at the orchard.
That says a lot more about them than it does about you.
Framing this as needing "consent" is deeply misguided. It's as silly as claiming that Microsoft Word installed an English language spellcheck dictionary without your consent. It's just part of the software. You consented to installing the software and having it autoupdate. That covers it.
Now we can argue whether or not it's an appropriate amount of disk space or bandwidth to use, but that's just a reasonable practical discussion to have. Framing it around consent is unnecessarily inflammatory and makes it harder to have a discussion, not easier.
IBM also infamously patented the XOR cursor.
> I would guess that using the tab key in this way was part of a patent they were pursuing and Microsoft's use would show this to be 'obvious' and thus not patentable.
IBM insisting it not be tab wouldn’t make sense. Microsoft was working for them, and the programs had to adhere to the CUA (Common User Access) standard.
Here's a real IBM 3270 keyboard.[1] Note the "Next field" key on the left, and the matching "Previous field" key on the right.
The IBM 3270 was a device for filling up forms. The mainframe sent the terminal a form with blanks, and the terminal let the user fill in the blanks. The terminal hardware prevented the user from overwriting the static parts of the form, and could apply some other form constraints, such as numeric fields. That was all done by the terminal. When the form was filled in, the user pressed ENTER, and the completed form was sent to the mainframe as one transaction. This approach let one mainframe service huge numbers of terminals. The user never experienced delays while typing and could type at full speed, often without looking.
PCs didn't have that usage model. The PC crowd was thinking "typewriter". One of the first terminals for home computers was called the "TV Typewriter".
Web forms do have that model, but with less consistency.
[1] https://sharktastica.co.uk/resources/images/model_bs/themk_1...
Having worked at IBM, I would guess that using the tab key in this way was part of a patent they were pursuing and Microsoft's use would show this to be 'obvious' and thus not patentable. But that is just a guess.
In the 80's IBM had a whole class of high level technical people called "Systems Engineers" whose entire job description was to opine on the merits of any given system. Not write systems, not debug them, and certainly not to explain them, it was simply to opine "you're doing it wrong."
In case you missed it, recent Rancher Desktop versions also went through this.
I feel like "AI didn't delete your database, you did" is all about who has accountability, though.
Claude does this too, with the Chrome extension.
It breaks like 80% of the time for me, and it's incredibly slow. Having it use Playwright (bonus: can test in FF/Saf too) was a big improvement.
> I don’t understand why Rust even has panics if its primary goal is safety.
Rust's goal is memory safety. Panics are perfectly memory safe.
MTP support is being added to llama.cpp, at least for the Qwen models ( https://github.com/ggml-org/llama.cpp/pull/20533) and I'd imagine Gemma 4 will come soon.
The performance uplift on local/self-hosted models in both quality and speed has been amazing in the last few months.
A lot of people underestimate the effect of shared values and beliefs on their happiness, though, and would be better served by taking a lower-paying position to live in an area that fits their values better.
FWIW I just checked and AMC Theaters are not connected to the streaming network AMC+.
In my house we have two businesses [1][2] so that adds two cards. You may also have a card for medical expenses that can be reimbursed with a FSA/HSA or a prepaid debit card that you got as a gift, etc.
[1] don't tell Mr. Fox he's running a business
[2] ... and will probably be adding a third
https://www.perlmonks.org/index.pl?replies=1;node_id=437032;...
As befits a history of perl, it is full of random quotes and rambling discourses about history, but it has a lot of info in it.
That mostly works when you're single and without any hard ties.
Uprooting a well grown tree isn't easy.
Have I got a book for you: https://en.wikipedia.org/wiki/The_Unaccountability_Machine
Not actually about technology at all, but about organizational structure.
> Most regulations (ADA, affirmative action, etc.) fall into the "not woke enough" category of model regulation
For sake of argument, let’s assume this is true. Those rules are still structured as laws, with boundaries and legal recourse. The precedent being set, that the President gets “voluntary” deference from private companies, is un-American and will be abused by the left.
"Clone this pass" as a service!
We change how we value code: https://jerf.org/iri/post/2026/what_value_code_in_ai_era/
Short-short version, code will still be accruing value in proportion to how much of the real world it has encountered. The bottleneck on building valuable code will be how much real world there is to go around. As is so often the case, what may initially seem to kill SaaS will actually make them stronger as they end up with more exposure to the real world than some random guy's random AI code.
From 'the hacker did it' we have moved to 'the AI did it'. The problem set is roughly the same.
How is it not related to the subject?
>The same is true in a physical card wallet.
If only a digital UI didn't have the same skeuomorphic limitations a physical card has ...oh wait!
(And it's not true that the same issue is true in a physical card wallet. In a physical card, either you get a different design from the bank, or you can trivially write on it with a marker or add a sticker to differentiate it).
>An 80 year old with early onset challenges can work this wallet, pick a card, and then hold the phone to the reader at a store.
Ah, yes, the standard target group for iOS and the Wallet app in particular.
I swear, the arguments people make...
I feel like a broken record to be saying this again, but seeing Claude's writing everywhere grates. Maybe I'm preaching to the choir, but can we at least post articles that weren't so obviously Claude?
It's been a year or two since I drove a Tesla, but in FSD mode it insisted I at least touch the steering wheel regularly.
They were corporate evil from day 1. The rest was just PR slogans, and playing the good guy as long as you don't need to squeeze profits.
Man, this is a dumb take even by HN standards.
OS thread overhead can be pretty substantial. Starting new threads on Windows is especially expensive.
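If you want to see that cost on your own machine, a crude micro-benchmark like this gives a rough feel (Python, so the numbers include interpreter overhead; treat them as a sketch, not a measurement of raw OS thread cost):

```python
import time
import threading

def noop():
    pass  # do nothing; we only measure thread lifecycle cost

n = 200
start = time.perf_counter()
for _ in range(n):
    t = threading.Thread(target=noop)
    t.start()   # spawn an OS thread
    t.join()    # wait for it to exit
elapsed = time.perf_counter() - start
print(f"{elapsed / n * 1e6:.1f} us per thread spawn+join")
```

Run it on Windows and Linux side by side and the difference the comment describes tends to show up clearly.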
Just enter the location and time in question as part of creating the pass?
Has The Information broken any critical news about OpenAI? I never connected the dots around why I started finding it increasingly not worth paying for over the last year or two, but editorial bias feels correct.
I'll probably get some flak for this, but this is about as good of a layoff email as he could have sent.
* explains the reasons (financials, AI enablement)
* talks about what folks who are leaving get in detail (first) and thanks them
* talks to the folks who are staying
Layoffs are hard, no doubt, and I am not sure he's making the right choice. I see plenty of doubt about some of the actions in other comments that echoes mine. I certainly wouldn't want to have 15 direct reports and also ship production code regularly. But as CEO, it's his job to make these kinds of choices.
The proof is in the pudding as they say. We'll see how Coinbase does with this new orientation in the next year or so and that will determine if this was a wise or foolish move. Is there a flood of talent leaving? Major breaches? Business as usual with better than expected profits?
Time will tell.
Good call!
The first sentence of that paragraph — "Another attempt came during the American Civil War." — should have been the first sentence of the next paragraph.
"Use it or lose it" applies to the brain as well as your muscles.
This was well known before this paper.
Real commitment to the bit there.
It is still like that. The airline’s operations all depend on the flight crew being in the right place at the end of the flight, which is a higher priority than getting a passenger there.
https://en.wikipedia.org/wiki/2017_United_Express_passenger_...
I think some of what is offensive about the Electron situation is that way too many Electron apps are things that live in the tray or try to hijack the application lifecycle. So they are not just burning up memory; they are burning it on some trivial thing in the tray, making your machine slow to boot, and complicating the tray UI.
Are you suggesting that Microsoft and Amazon's sponsorship of Overture comes with an understanding that people who work on Overture will spend their time writing articles that "boost AI"?
Does "boosting AI" include opening an article with "Frontier models are really good at coding these days, much better than they are at other tasks"?
It always saddens me Intel GPUs are such fourth class citizens.
Why in the world do people keep shipping Chrome with their pseudo native applications?
To the point it got replaced by std::function_ref in C++26.
As European, I can tell that it depends on the kind of cinema, and country.
My experience, being discussed in another thread, is that only big commercial multiplex do it, many small cinemas with more alternative content, usually don't do assigned seats, only ticket reservations.
Not on my devices. Auto update has been abused so often now that it is an embarrassment to the industry. Auto update should be for bug fixes and security issues only.
Agree on title. Too dramatic.
The author seems to be obsessing about the overhead for trivial functions. He's bothered by overhead for states for "panicked" and "returned". That's not a big problem. Most useful async blocks are big enough that the overhead for the error cases disappears.
He may have a point about lack of inlining. But what tends to limit capacity for large numbers of activities is the state space required per activity.
> The book does what it says on the tin but it's more on persuasion methods and framing, which of course can be used for nefarious purposes.
An interesting result of reading those books is one starts to recognize when one is being manipulated.
Just the other day a door-to-door salesman appeared at my door, and he tried a number of classic sales techniques on me. He lacked, however, some accouterments that a legitimate salesman would have, so I had to be pretty firm in saying no.
It is also not supported, beyond people seeing their nick by sheer luck.
The threat model is that your container gets owned.
The password should only exist in the process memory for the few lines of code that open that database connection, and then be wiped once you have the handle.
Ideally, homomorphic encryption should be used instead.
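Roughly what I mean, as a sketch in Python (where true erasure is best-effort at most, since the runtime may keep copies around; connect() here is a stand-in, not a real driver API):

```python
# Keep the password in a mutable bytearray so we can overwrite it,
# and zero it as soon as the connection handle is obtained.

def connect(password: bytes):
    # placeholder for a real database driver call
    return object()

def open_db(password: bytearray):
    try:
        # note: bytes() makes an immutable copy we can't wipe; a real
        # driver that accepts a buffer directly would avoid this
        handle = connect(bytes(password))
    finally:
        for i in range(len(password)):  # best-effort wipe
            password[i] = 0
    return handle

pw = bytearray(b"s3cret")
conn = open_db(pw)
assert pw == bytearray(len(pw))  # buffer is all zeros after use
```

In C you would memset (or better, explicit_bzero) the buffer for the same effect; in garbage-collected languages this only narrows the window rather than closing it.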
Yeah, but then you have about 5 minutes to change it to whatever you want.
The very first example that this study says is "false or unproven" uses ambiguous language at best:
> Animal protein is healthier than plant-based protein.
All commonly consumed sources of animal protein (meat, poultry, fish, eggs, dairy, etc.) are complete proteins, meaning they provide all essential amino acids. This is not true for all sources of plant-based protein. In addition, "protein" as is often used to indicate a part of a meal (I mean not just the technical definition of a chain of amino acids). Vegans are nearly always advised to supplement with B12 because good plant protein sources like legumes are poor sources of B12.
I understand what the study is getting at with the question; as far as I am aware, there are no studies that show that getting, say, 10 grams of complete protein from animals is any different from getting 10 grams of complete protein from plants. Still, given this question is easily ambiguous for valid definitions of "healthier", I find this study suspect, despite having no problem with the general idea that tons of people believe absolute batshit insane ideas about health and nutrition.
Or tune the engine correctly. Probably has an off-the-shelf "performance" carb that's set much richer than it should be and a "full race" cam that only makes sense for a track car, giving horrible fuel economy and actually less low-end power.
My daily driver is roughly as old, has a 400 V8 with a 4-barrel, idles so quietly I've had passengers surprised that the engine was running, and gets around 20-25mpg if I resist the urge to open it up all the way.
Without disassembling and tracing the Intel Windows drivers (something I don’t feel like doing)
As someone who generally doesn't use AI in software development nor RE, this is one thing that I'd recommend trying one on to see what it can do: the problem is clearly defined and a solution is easily validated, and it's a problem you're not interested in digging deeper yourself. The other comment here about 0000 and FFFF checksums seems like a good place to start.
A little more digging found this discussion from TODAY regarding what looks like a very similar bug in one of Intel's Linux NIC drivers: https://lkml.org/lkml/2026/5/4/1886
How well does that long translation prompt work?
... and that bug was spotted in the canary release, reported and fixed.
Sounds like responsible open source software development to me. That's what pre-releases are for.
r/wallstreetbets has been having an absolute field day with this interview, gotta admit it's pretty hilarious.
I guess wallstreetbets can giveth (given they're probably the primary reason GameStop even still exists as an independent company today) and taketh away.
I understand this sentiment but don't know what to do with it. "Not even Schneier can be trusted" is "not even wrong". Schneier has very little to do with modern cryptography! But a long time ago, someone created a "Bruce Schneier facts" meme site, and now it's like an article of faith that he's a cryptography engineering expert. No, not so much, and I don't think he'd disagree.
He's a perfectly nice guy with a lot to say about information security and its intersection with public policy. But I think it's been plural decades since he basically declared himself outside of modern cryptography (you could call it at the point where he said he didn't "trust the math" of elliptic curves, which he left out of Practical Cryptography, over 26 years ago).
It's not so much that you should or shouldn't take "Applied Cryptography is bad" personally; rather: if you think Applied Cryptography is a useful reference or learning tool, it's pretty important to know that it is not.
SOC2 has never been about software resilience. You can create a set of attestations that will require you to present evidence to your auditors (who are ~accountants and will not know what the dotted quads of an IP address mean) about software quality, but there is no reason to do that and most organizations don't. SOC2 cares a great deal more about access management (in the "plotting on spreadsheet" sense) than it does about vulnerabilities.
My thing here is: you want to summon some kind of deus ex machina reason why the unpredictability (say) of agent-generated software will fail in the real world, but the concrete one you came up with fails to make that argument, pretty abruptly. Which makes me think the argument is less about the world as it is and more about the world as you'd hope it would be, if that makes sense.
It's definitely the sort of thing that Crowley from Good Omens would be working on.