What are the most upvoted users of Hacker News commenting on? Powered by the /leaders top 50 and updated every thirty minutes. Made by @jamespotterdev.
If there is one term that has lost its value over the last three decades, it is 'national security'. It gets trotted out for any number of reasons; the easiest and, in my opinion, most accurate interpretation is that it stands in for 'our desire to do this'.
If you want to keep software working on systems with a 9-bit byte or other weirdness, that's entirely on you. No one else needs or wants the extra complexity. Little endian is logical and won, big endian is backwards and lost for good reason. (Look at how arbitrary precision arithmetic is implemented on a BE system; chances are it's effectively LE anyway.)
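To illustrate the bignum point (a minimal Python sketch, not taken from any particular arbitrary-precision library): multi-precision integers are conventionally stored least-significant limb first, i.e. little-endian at the limb level, whatever the host byte order.

```python
def to_limbs(n, limb_bits=32):
    """Split a non-negative integer into little-endian limbs."""
    mask = (1 << limb_bits) - 1
    limbs = []
    while n:
        limbs.append(n & mask)   # least-significant limb first
        n >>= limb_bits
    return limbs or [0]

def from_limbs(limbs, limb_bits=32):
    """Reassemble the integer from little-endian limbs."""
    return sum(limb << (i * limb_bits) for i, limb in enumerate(limbs))

n = 0x0123456789ABCDEF
assert from_limbs(to_limbs(n)) == n
# Python's own serialization shows the same ordering:
assert n.to_bytes(8, "little")[0] == 0xEF  # lowest byte comes first
```

The same least-significant-first layout is what makes carry propagation a simple forward scan, which is the "effectively LE anyway" observation above.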
It’s peak irony for a company owned by the search overlord.
...whose search engine has itself become noticeably less of a search engine and more of a recommendation/sheeple-herding engine over time.
Check their comment history. This account is either trolling or just spouting talking points, it is getting pretty tiring. They can't even be bothered to properly cut-and-paste, see the missing 'I' at the beginning of the comment above.
> They're also competing to make as much profit as possible, which has effectively zero benefit for the public.
The end result is plenty of cheap stuff for people to buy. It's why free markets have full supermarkets and socialist markets have long lines.
Take the free market in software, for example. My entire software stack on my linux box cost me $0.
> Also have you checked the birth rate? Do you expect it to grow in a post-war context?
Yes, birth rates tend to go up when wars end.
> in the same sentence you say ‘for the first time’ and then ‘Plenty of precedent’. You either have no idea what ‘plenty’ means or you contradict yourself
This is baffling.
Women entering the workforce in the 1940s due to the war is the precedent. It happened throughout the developed world. We are now eighty years past that demonstration.
> The story of your country, which seems the only one you know, isn't as relevant as, for example, the history of ussr. We didn’t have a boomer generation.
There was indeed a birth rate spike in the 1940s in Russia.
https://www.statista.com/statistics/1038013/crude-birth-rate...
Unfortunately… Stalin.
Side note: I have dual citizenship, so I’m not sure which one of them is “the only one” I know.
Most of the world is below replacement rate (~2.1 TFR), and the rest will get there in a decade or two. Educated, empowered women delay having children, have fewer children, or no children. This holds across both developed and developing countries.
https://www.pewresearch.org/short-reads/2025/08/15/5-facts-a...
https://www.sas.upenn.edu/~jesusfv/Slides_London.pdf
https://www.visualcapitalist.com/cp/mapped-countries-by-fert...
When I apply for a job, I use data to figure out the highest salary the company will accept.
The tl;dr is basically to stay with Win32 and ignore all the new and shiny.
That AI image at the end was more amusing than informative. Almost lost it at "Win15" and "Chrondum + frade.js".
Opposing viewpoint: discovered AI music without even realising it was AI, and now pleased with getting effectively infinite free entertainment. Just like human-made music, there's a lot of bad and mediocre, but occasionally great music. It's not like humans were creating in a void either --- everything is a derivative work.
Nothing against this project, but it's been the case since forever that you could get better quality responses by simply telling your LLM to be brief and to the point, to ask salient questions rather than reflexively affirm, and to eschew cliches and faddish writing styles.
> AI-generated content should have the same amount of copyright as prompt texts.
That's about right currently.
https://www.reuters.com/legal/government/us-supreme-court-de...
"Plaintiff Stephen Thaler had appealed to the justices after lower courts upheld a U.S. Copyright Office decision that the AI-crafted visual art at issue in the case was ineligible for copyright protection because it did not have a human creator."
Why would the retirees want to be put back to work?
Why would the students want to have to do two full-time tasks at once?
Why would the homemakers want to add another full-time task?
Why would the people with cancer want to have to work from their hospital bed?
There's more to life than work. Get a hobby! Hope and purpose don't have to come from menial labor.
> I absolutely despise Musk at the same time for being absolute buffoon
Buffoonery is harmless, why despise him for that?
> From their perspective, gambling on a new managed-code framework had produced the most embarrassing failure in the company’s history
Most embarrassing failure in the company’s history up to that point.
I have a gentle rule, which is "if you can do it in one place, it is probably possible to do it in a second". The Swiss are not a separate species.
All connections to the Internet are at some level "shared", except perhaps if you get a direct connection to one of the core routers. As others have mentioned, this is in a dense area and much closer to being in a LAN environment.
The other point I'd like to bring up is this: how useful is a 25G connection to your local demarcation point if your speeds to most sites will be far lower in practice, because the Internet isn't circuit-switched?
Care to give a rebuttal?
'Way back when the universe was formed, the first stack frames came into existence. Nobody truly knows who mapped them there, though among the well-learned, whispers of a great kernel are heard, a certain "Linux".'
That was 1970-01-01T00:00Z right?
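(Quick sanity check in Python for anyone who wants to confirm the epoch:)

```python
from datetime import datetime, timezone

# Unix time 0 is 1970-01-01T00:00:00 UTC -- the "dawn of the universe"
epoch = datetime.fromtimestamp(0, tz=timezone.utc)
print(epoch.isoformat())  # 1970-01-01T00:00:00+00:00
```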
I don't think it's been possible to feed the UK domestically since before WW2.
I note you put the word "social" in there; very little social housing is being built, it's mostly private. Agrivoltaics are also possible, but of course everyone would rather do the politics of emotions ("hate farmers") than discuss the issues. Such as how we grow enough electricity, too.
As Peter Thiel literally said, "Competition is for losers."
I am quite impressed it still is.
So much better. Hard to quantify, but even the small Gemma 4 models have that feels-like-ChatGPT magic that Apple's models are lacking.
> or in the cloud but way more expensive then it is today.
Why? It's widely understood that the big players are making profit on inference. The only reason they still have losses is because training is so expensive, but you need to do that no matter whether the models are running in the cloud or on your device.
If you think about it, it's always going to be cheaper and more energy-efficient to have dedicated cloud hardware to run models. Running them on your phone, even if possible, is just going to suck up your battery life.
> its not science. Its a strategically costly land grab
Step away from your screens. Framing everything exclusively in these hard terms isn’t healthy (or true).
> None of them were removed from office.
Correct. But that's not because they weren't impeached.
Impeachment is part of the process; three presidents have been impeached, Trump twice. Then comes the trial, and conviction/acquittal.
Complete nonsense, easily debunked. You should be embarrassed to post this.
Cool influence value discovery tool!
Roseburia inulinivorans probiotic when? They'll probably add it to the premade protein shakes and mixes marketed to those building muscle.
What a doublespeak title.
"Reaffirming our commitment to mass surveillance"
That's more like it.
I have no idea if that claim is true, but what I did love about visiting Finland was that even the small apartment I rented had a sauna in it! It seems like it's a non-negotiable for even the smallest accommodations.
There's a whole spectrum between "full AI slop" and "no AI usage". This article is far towards the former.
Yes, we need to.
If I'm meeting someone for drinks and then an emergency happens, I kind of want to know rather than waiting around for 45 minutes and then giving up.
From the first line of the post:
> Last week, I wrote a tail-call interpreter using the become keyword, which was recently added to nightly Rust (seven months ago is recent, right?).
They were ahead of nation states, but China has already closed the gap (wrt reusable vehicles). Musk squandered the first mover advantages of Tesla and SpaceX for individual power and wealth instead of the long term success of those enterprises. And so, it’s not unreasonable for folks to say Tesla and SpaceX were just tools for a grifter to become the wealthiest person in the world; that’s what he was optimizing for, based on the evidence. Musk cares about power, control, and himself, broadly speaking.
Does it matter in the long run? Probably not. Musk will be like Gates with more wealth; speaking circuit while others build at scale (China is ~1/3 of global manufacturing capacity).
The churn would have been much worse if Microsoft was rolling out successful GUI framework after GUI framework. As it is you can still write a Win32 app if that pleases you, or still write .NET (and damn that runtime download!)
Microsoft has bought into ‘make a web app’ since 1988: they introduced AJAX, they got flexbox and grid into CSS, and they added numerous HTML5 features to support application UIs. They frikin’ bought npm! I use Windows every day but I almost exclusively develop cross-platform systems based on the WWW, Java, Python, etc. Whenever I have developed with .NET it has been for a cross-platform front-end like Silverlight or Unity/itch.io.
I can’t say I have a desire to make a native Windows GUI app when I could make a web app: if it’s worth doing from my computer, isn’t it worth doing on my iPad from anywhere with Tailscale? For all the complaints about modern JavaScript, it gives you the pieces to make a very pleasant world in terms of DX and UX, and you certainly don’t need to ship an Electron runtime for many applications.
Most directly, if somebody replies to your comment and you get a notification about that and it results in an ad impression: ka-Ching!
> but it was arguably the Christian value system which forged the government and institutions that made these achievements possible.
Many of the founders were specifically anti-Christian. They were deists, and believed in a higher power, but specifically rejected the idea of a divine intervention of God or Jesus.
Christians do not own the idea of being nice to others and trusting others.
Literally every VC funded consumer product has switched from a "growth at all costs" phase to a "Now we hike prices, make money, and generally enshittify" phase, and tons of those companies are still around (e.g. Uber), so I'm not sure why anyone thinks it would be much different for AI.
They aren't a monopoly, hence why.
Checks and balances mean nothing when the same party controls house, senate, president, and supreme court.
The Beta Site is at https://beta.stackoverflow.com
To me, it looks more like Digg from the old days.
> The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.
He'll get things (papers, code, etc.) which he can't evaluate. And the next round of agents will be trained on the slop produced by the previous ones. Both successive Bobs and successive agents will have less understanding.
Thanks for the pointer, https://news.ycombinator.com/item?id=47649354
edit: looks like there are affordable managed hosting providers for keycloak.
Once upon a time at Google: The year was 2013, and I'd been selected to be among the first 8,000 people to get Google Glass. I had to go to Google HQ in NYC from my home in Virginia to get it and be instructed 1:1 on how to use it. I was given a toll-free phone number to call for support by a Glass expert, available 24/7/365.
Not only did they answer immediately whenever I had even the smallest problem or question: I twice broke my Glass, and each time I'd call the support number to ask for a replacement.
Google's policy was that no matter how you broke it or how many times it happened, they'd replace it free. They'd immediately send a box to return the broken device (prepaid) and a couple days later a brand new Glass would arrive.
Like I said, once upon a time....
I actually heard a podcast once, and they asked a similar question. "Why does it make sense to grow alfalfa in the desert?"
The answer was surprising at first, but not really:
- You can grow a lot of it because there's a lot of sun
- You can get 8 or so harvests when in a normal climate you might get two
- There are no bugs in the desert that will eat your lettuce
Here's the 2023 episode if you want to listen: https://podcasts.apple.com/us/podcast/an-arizona-farmer-on-h...
“The bottleneck is not political. It is geological and hydrographic.”
… and then …
“The binding constraint on Hormuz was never a minefield or insurance. It is the US Navy’s willingness and ability to reopen it.”
Note I believe this one because of the amount of elbow grease that went into it: 250 hours! Based on smaller projects I’ve done I’d say this post is a good model for what a significant AI-assisted systems programming project looks like.
> Large transformer making is a craft
Dumb question: why can’t we mass manufacture smaller transformers and join them up?
Where are you pulling locations from? Many small towns aren’t populating.
> they're being sued by a handful of customers
To be fair, they’re being sued by customers who were marketed memberships.
I don't believe the AT protocol really changes anything. I mean, for a lot of people “It lives in the open social graph” sounds about the same as “shouting into a void.”
Mastodon has been growing on me over time, some of it is that I’ve learned to accept the things I don’t like about it, some of it is that I have a good friend graph, but a lot of it is that the complexity scares away a lot of the “muggles” and holds off the Eternal September. That is, it is so bad that it is good; so certain kinds of subcultures really thrive.
Precisely. The first 10 rungs of the ladder will be removed, but we still expect you to be able to get to the roof. The AI won't get you there and you won't have the knowledge you'd normally gain on those first 10 rungs to help you move past #10.
I’m a little skeptical that it is as easy as you say it is. Particularly when you are dealing with something as bad as the current toolchains, there are a number of barriers (say 10), and maybe the LLM clears some of them (say 6) and you still have a lot of details to get bogged down in.
In the software field you certainly hear things like “I have 20 agents running in my gas town and last night they coded something that will put Google out of business” and also “all I get is unmaintainable trash.” My own experience is “your results will vary”: I’ve been very impressed sometimes, and other times the agent just couldn’t do it.
So I’d expect FPGA to be the same, helpful to some extent but not a revolution.
I also think you are right to say FPGA applications are scarce. FPGAs are expensive for what you get: strong compared to other architectures when latency really matters, not so strong for throughput. I think about the new-retrocomputer space, where display controllers are tricky (conceptually simple but a lot of parts) and an FPGA is an obvious option, but people are more likely to use an ESP32, more because it is cheap than because it is easy to program.
There is a study that shows that what the model is doing behind the scenes in those cases is a lot more than just outputting those tokens.
For an LLM, tokens are thought. They have no ability to think, by whatever definition of that word you like, without outputting something. The token only represents a tiny fraction of the internal state changes made when a token is output.
Clearly there is an optimum for each task (not necessarily a global one), and a concrete model for a given task can be arbitrarily far from it. But you'd need to test it out for each case, not just assume that "less tokens = more better". You could be forcing your model to be dumber without realizing it if you're not testing.
I see this fallacy being committed a lot these days. "Because LLMs, you will no longer need a skill you don't need any more, but which you used to need, and handwaves that's bad".
Academia doesn't want to produce astrophysics (or any field) scientists just so the people who became scientists can feel warm and fuzzy inside when looking at the stars, it wants to produce scientists who can produce useful results. Bob produced a useful result with the help of an agent, and learned how to do that, so Bob had, for all intents and purposes, the exact same output as Alice.
Well, unless you're saying that astrophysics as a field literally does not matter at all, no matter what results it produces, in which case, why are we bothering with it at all?
The click bait version adds “and it’s electrifying”.
Important note: this is not about Talk Like a Pirate Day.
The subscription model for an app I'm running on my desktop is taking the piss a bit. I'm fine paying for stuff I use, but I miss buying apps once and either using them as much as I want, or paying to upgrade.
Now I'm both locked in to paying every month, and can't keep using the app as it was when I bought it, because it auto updates and most apps will invariably have a server component that will quickly become incompatible with old app versions.
I hate the direction of "we'll force you to update even if you don't like the new direction, and we'll force you to pay for the privilege", so I'm voting with my wallet on this.
Turns out nobody wants a closed-down headset controlled by Meta, no matter how slick it is. I do think we'd have seen an explosion of cool apps if it were open.
Here's hoping the Steam one fulfills the dream.
I've been coding every Easter since '96 and I don't plan to stop now! I balance family time and coding, as usual.
Oh boy. Someone didn't get the memo that for LLMs, tokens are units of thinking. I.e. whatever feat of computation needs to happen to produce results you seek, it needs to fit in the tokens the LLM produces. Being a finite system, there's only so much computation the LLM internal structure can do per token, so the more you force the model to be concise, the more difficult the task becomes for it - worst case, you can guarantee not to get a good answer because it requires more computation than possible with the tokens produced.
I.e. by demanding the model to be concise, you're literally making it dumber.
(Separating out "chain of thought" into "thinking mode" and removing user control over it definitely helped with this problem.)
The difference obviously being, his way you own the memories; in what's currently deployed, it's the platform that owns them.
Depends if the AI masters also own said proprietary technology.
While UNIX is famous for ‘everything is a file’, in reality this concept only fully holds in Plan 9; in UNIX IPC, not everything is a file.
Sorry, I couldn't resist. :)
ISO7816 (smartcard) has existed for nearly 4 decades as the standard secure identity card, widely used by the banking industry among others. Very unintrusive and not hostile beyond needing to carry a little chip. If governments want a national ID, they could just give everyone one of those.
> Write too many color emojis in a row on a YouTube livestream chat
> Get banned from society for life
I've heard it phrased thus: The "G" in "GPU" stands for "general-purpose".
Incidentally this is the same driver that someone else used with an RTX 5060 Ti: https://www.tomshardware.com/software/windows/enthusiast-ins...
It's good to see that the latest GPUs can still be used in "dumb framebuffer" mode, are mostly VGA-compatible, and have VESA VBE support. I suspect AMD / NVIDIA might still have some sort of DOS-based factory tooling when bringing up new GPUs for the first time. In sadder news, I've read that the latest Intel integrated GPUs no longer have a VBIOS and are UEFI-only; although it might only be a matter of time before someone vibe-codes (vibe-ports?) one based on those from an older model.
Then you can't take a Waymo any more.
The ridiculous amount of focus on this one individual vs the complete lack of attention for the thousands of Iranians already dead is very disturbing.
I just start AppImages from the command line and put them in /home/$username/bin; that seems to take care of most of the annoying edge cases. Snaps are ridiculously hostile, abusing the mount system and polluting all kinds of places where they have no business going, so I've completely purged the whole snap subsystem from my machine. Flatpak I've managed to avoid so far.
The joke is "expect one day of downtime every leap year", but in practice it seems a bit more than that.
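Back-of-the-envelope for the joke (ignoring calendar edge cases): one day of downtime per leap-year cycle of 1461 days works out to roughly "three nines".

```python
# One leap-year cycle: three common years plus one leap year
days_per_cycle = 3 * 365 + 366          # 1461 days
availability = 1 - 1 / days_per_cycle   # fraction of time up
print(f"{availability:.4%}")            # 99.9316%
```

So anything noticeably below ~99.93% over a four-year window is indeed "a bit more than that".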
> Finding investors willing to take less than 5% return
If it’s safe I can leverage it.
> get on a 10 or 30 year federal bond with more risk associated with it
5% is 10 bps over the 20 year. Not a lot. But I picked a round number. Point is why couldn’t it be undercut? If it can’t, easily, it’s correctly priced.
Thanks for the Tintin reference! I immediately knew what you mentioned as that panel flashed in my mind.
I don't know what it is about the Tintin comics, but it's quite amazing that, for many of the fans, the panels are so strongly memorable even after so many years.
The paper touches on that:
> The ARR shows the extent by which total predicted COVID-19 deaths exceeded officially reported COVID-19 deaths during the period. A limited number of counties had ARRs < 1, which suggests that there were more officially reported COVID-19 deaths than total predicted COVID-19 deaths. One reason that a county could have an ARR < 1 is if death certifiers recorded people as dying from COVID-19 when they had COVID-19 but actually died from another unrelated cause.
> I hated writing software this way. Forget the output for a moment; the process was excruciating. Most of my time was spent reading proposed code changes and pressing the 1 key to accept the changes, which I almost always did. [...]
That's why they hated it. Approving every change is the most frustrating way of using these tools.
I genuinely think that one of the biggest differences between people who enjoy coding agents and people who hate them is whether or not they run in YOLO mode (aka dangerously-skip-permissions). YOLO mode feels like a whole different product.
I get the desire not to do that because you want to verify everything they do, but you can still do that by reviewing the code later on without the pain of step-by-step approvals.
Maybe AI will be what finally makes people realise how absurd the concept of Imaginary Property is.
I don't think you'll be able to provide evidence for the uneven distribution of IQ across nationalities.
The thing that really confuses me about this is that it has very real negative consequences. I cannot have a conversation about Copilot!
If someone says "I used Copilot to..." or "Copilot is great for..." or "Copilot sucks because..." they haven't communicated any useful information to me, because I have no idea what product they are talking about.
And if I ask them (which I always do) they still have trouble describing the product, because Microsoft give them no help at all. How DO you explain that something was the Copilot thing that's a feature on GitHub.com that shows up in the web interface there, as opposed to whatever the heck other forms of GitHub Copilot.
(Amusingly there are 15 "GitHub Copilot..." products listed on the linked website and I can't tell which if any of those 15 corresponds to the chat UI on the logged in GitHub.com homepage, or that's available in the "Agents" tab in a repository.)
Surely Microsoft feel this pain all the time? Bug reports in "Copilot" must be almost impossible to interpret.
Very cool! However, it took me a while to figure out how this was supposed to be used.
For others:
On desktop, at least, you need to click and drag up/down on the left-hand control that says "swipe" with two arrows.
Or click "Autoplay".
laszlokorte -- can I suggest that the up/down icons should also be clickable/holdable? Since they're icons, they look like buttons, not a "swipe area". And also, maybe default to having autoplay on (but still with the controls visible)? Because it was not clear to me, at first, that the whole point of the site is infinite zoom.
Like, did Grok generate that on its own two months ago? Did you tell it to generate it? What happened?
Plea agreement: https://www.justice.gov/atr/media/1434041/dl?inline
If it’s not broken, don’t fix it.
3270 is a very efficient form of distributed computing where terminals and their controllers control a lot of state and the mainframe is in charge of the more important things. It was a browser from the 60’s, mostly 70’s.
> It says we're willing to give rich and powerful people a pass just because they make overtures towards something we care about.
This encapsulates the entire moral bankruptcy of "the Epstein class" so perfectly. I highly recommend reading the series about the Epstein class by Anand Giridharadas (Giridharadas didn't actually coin the term "Epstein class", apparently that was Ro Khanna, but he really was the first to popularize and clearly define it).
Nothing, just keep in mind they're still unscrupulous.
To the extent that most of this is going into AI and people are having their ChatGPT and Gemini requests throttled because of lack of capacity, they need it now.
AI is dramatically more compute- and memory-hungry than past computing models, so if that's what people are using, it's going to require a large build-out of computing capacity to support the requests that are being made right now.
Because there are customers there
Isn't X usually the original source these days?
I’ve had Claude Code diagnose bugs in a compiler we wrote together by using gdb and objdump to examine binaries it produces. We don’t have DWARF support yet so it is just examining the binary. That’s not security work, but it’s adjacent to the sorts of skills you’re talking about. The binaries are way smaller than real programs, though.
Or that the pinnacle of success and status is having a partner staying at home raising children. It is not.
Next step in Germany - from mid-2027, a fitness test for 18 year olds.
I actually like when that happens. Like when people "correct" me about how reddit works. I appreciate that we still focus on the content and not who is saying it.
My friends who work at Meta said that they bought 100s of copies of the book and were passing it around to make sure everyone read it.
The US spends more per capita, and even as a share of GDP, on healthcare out of public funds than some advanced industrialized states that have universal systems, as well as spending even more on healthcare out of private funds than out of public funds. If we didn’t have a system which expended vast quantities of additional resources in order to assure that a substantial subset of the population is denied needed healthcare and instead just provided the needed healthcare, we could fund all those other things without cutting back on the war crimes, crimes against humanity, and crimes against peace, either direct or those that we subsidize that are executed by other regimes.
We still should cut down (ideally to zero) on war crimes, crimes against humanity, and crimes against peace, but the reason is because those things are unqualified evil on their own, not because doing so is necessary to fund healthcare and other priorities, which it very much is not.