What are the most upvoted users of Hacker News commenting on? Powered by the /leaders top 50 and updated every thirty minutes. Made by @jamespotterdev.
I think older people who have more patience are supposed to help. We moved to America when we were young, so my mom had to raise my brother and me by herself, which was very hard coming from somewhere people live in multi-generational households. She had very little patience for it. But she and my dad have way more patience for my kids. My mom lived with us for a year and then my wife’s mom lived with us for a year while our youngest was 2-3. Then we moved 10 minutes from my parents. My middle child kept getting ear infections, so he went to my parents’ house every day for two years. These days my boys (4 and 7) go to my parents’ house every weekend.
I don’t think younger people are wired to be taking care of babies full time. I’d imagine in nature they’d be out hunting or gathering and our attention spans are wired for doing that.
The author of this post understands that private keys are never exchanged. Read it more carefully.
It is an article of faith on message boards that IQ tests are somehow unlawful in employment in the US because of Griggs. That's patently not the case. Plenty of large corporations with deep pockets do general cognitive testing (and even IQ testing) of incoming applicants. The companies that administer these tests have logo crawls, like every other product company, and the names are companies your mom will recognize.
Griggs says that you can't employ IQ tests to discriminate for jobs where the IQ threshold you established is immaterial to the job. For modern knowledge work, that's a trivial bar to clear. It is essentially the case that Griggs is a nonfactor in white collar employment.
My motivation for calling this out isn't that I'm OK with discriminating against Black people; quite the opposite, in fact. IQ tests aren't used more widely because they aren't effective at qualifying employees. IQ is an idée fixe among A Certain Part Of The Internet, and part of their pitch is that IQ tests are a secret cheat code that would enable hiring purely on merit, if only the woke wouldn't prevent it. No, false.
This seems to have happened about a year before "The IT Crowd" episode "Smoke and Mirrors" aired.
In that episode Moss, one of the IT denizens, goes to a TV studio where he is mistakenly put on a news program and interviewed about a war.
I wonder if they're related...
If I want a model I'll go download one. (And I did, not long ago, to play around with image generation.)
That's mostly incorrect. In Maryland, like in most places in the country, the distribution infrastructure is controlled by regulated monopolies that buy power on the market from generators. Your bills separate out the fees for usage and the fees for distribution, and the Maryland PSC has to approve both.
I use an extension that gives me a customized homepage, but I still always get the "what's new" tab on every major version upgrade.
It's a totally separate tab that opens. It's got nothing to do with what you use as your homepage.
A 486 already had over a million transistors. These are in the thousands.
Well, no shit. First grants went out in 2024, just in time for the administration to change; Lutnick put it on hold almost immediately. It also hasn't cost $42B: "By 2026, about half of the $42.45 billion allocated by Congress during the passage of the IIJA remained unused".
https://en.wikipedia.org/wiki/Broadband_Equity,_Access,_and_...
Incidentally, BEAD includes LEO satellites as part of it. SpaceX got grants from it. (https://texasstandard.org/stories/spacex-demands-changes-fed...)
If you can't get results with the thing I'm getting results with, what other explanation would you give?
I wanted to get LLM feedback while writing without having the LLM suggest/write text for me, so I built https://www.writelucid.cc
I struggle with this too in Edinburgh; I make a point of trying to stay engaged and keep recognizing how amazing the place is from the outside.
Going to the festival (and the book festival, back when that was in Charlotte Square) is improved by leaning into your local status and knowing how to duck in and out. And ideally knowing someone with a lanyard who can get you into the media bar: it's not cooler and more happening in there, it's actually quieter.
There's a vennel route across the city. It's an odd experience going through a deserted and mildly unpleasant alley, stepping out into a shuffling horde of tourists, cutting sideways across their paths, and ducking behind some bins into another quiet path. Like walking from the wings of the stage across it.
Question: for software development, how capable a model do you actually need? Can it be run locally? Can someone train something that knows a lot about software but lacks comprehensive coverage of history, politics, and popular culture?
YC has funded over 5000 companies, and this page catalogs 39 that failed, many of which, on the site's own terms, are simply business failures, with no additional drama. I don't think the authors of the site realize the case they're actually making here.
Alternatively, just make it illegal to ship any kind of initial bootloader as part of a CPU's/SoC's mask ROM in any computing device that is marketed as a general-purpose one.
No, you just need to make it illegal to have the bootloader contain hardcoded key material and use it for verifying the code it loads.
I wrote to the EU contact about this and got a patronising reply about how good it is, the app being open source, and whatnot.
Clearly tailored to the regular normie without technical skills.
But all commodities are like this. It is actually pretty easy to send some electrons great distances, or heck at least it's a well understood, solved problem. It's just that those interconnections haven't been built yet.
Heck, oil is probably the "default" example of what a commodity is, but we're now all acutely aware of what happens when moving that oil from one place to another becomes exceedingly difficult.
Chill out on my porch, read a book, make a salad? I don't think that's what the post is getting at.
Not a Show HN, just some article in a language only a minority here can read.
Orthography and https://en.wikipedia.org/wiki/Register_(sociolinguistics)
A prime example of how German dubbing tried to make films more comedic. Like the Bud Spencer/Terence Hill films, which were a huge success in Germany because of the slapstick dialogue not present in the Italian original.
In Dark Star they renamed the Alien to Hüpfgemüse, which roughly translates to "hopping vegetable". It's a hoot!
"the most normal type of thing you can think of"
I can believe that (and your sudafed guess is likely correct), but then why be obscure about it, when you could say "turns out xyz is illegal in Japan; do not let your well-meaning friends/family mail you medicine of any kind"? That said, I didn't watch the whole video. I know this hyper-edited style is popular nowadays, but to me it feels like advertising/bait and I don't want to invest the energy to parse it.
This is a really good thread on why this technology is becoming a problem for "open" anything. The argument "we can create our own separate web" is fine until all of your services are behind the web that locks you into owning a Google approved or Apple approved mobile device.
I had trouble reading the web site ... I'm sure someone can provide a link to an archive or something.
Meanwhile, I used "lynx -dump" and lightly edited the result, so the plain text is here:
To be honest, I liked your original response about returning a 409 - it's not something I'd done before and I like how it keeps things simpler.
But your follow-up responses here are making me rethink. Now you have to have all these special cases where the original request is still in process. I think your assertion of "99% are simple POST operations" is bullshit. For the times where idempotency is hard and really matters, oftentimes you're calling a third-party API, like a payment processing API.
I would think a better approach would be to always return a 409 on a subsequent request, regardless of whether it passed or failed, and then have a separate standard API that lets you get the result of any request by its idempotency key.
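A minimal sketch of that design in Python (the in-memory store, endpoint shape, and all names here are my own illustration, not any particular framework):

```python
# Sketch of the idea above: a repeated request with a known
# idempotency key always gets a 409 plus a pointer to the result,
# which is retrievable separately by key.

class IdempotencyStore:
    def __init__(self):
        self._results = {}  # key -> ("pending" | "done", result)

    def begin(self, key):
        """Return True if this key is new; False if already seen."""
        if key in self._results:
            return False
        self._results[key] = ("pending", None)
        return True

    def finish(self, key, result):
        self._results[key] = ("done", result)

    def lookup(self, key):
        return self._results.get(key)


def handle_post(store, key, do_work):
    if not store.begin(key):
        status, _ = store.lookup(key)
        # 409 regardless of whether the original attempt finished;
        # the client fetches the outcome via the lookup endpoint.
        return 409, {"status": status, "lookup": f"/requests/{key}"}
    result = do_work()
    store.finish(key, result)
    return 201, result
```

The first request does the work and returns 201; any replay gets a 409 and has to go ask the lookup endpoint what happened, whether the original is still pending or done.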
Not everywhere; in Portugal it's the first Sunday of May.
Excellent, I'm glad this is resolved.
From 5 days ago, 1138 comments
As if war mongers, and AI tech bros would care.
After COVID it feels like doom is unavoidable.
It's not like you have a choice, the printer doesn't work locally unless you enable LAN mode, and then it only works locally. Bambu make you pick either "closed servers" or "the mobile app doesn't work" for no reason.
I'll chip in to this developer's legal defense fund because I want to be able to do whatever I want with my printer, and "I can't do what I want with my printer" is a bigger problem for me right now than "the developer made a TCP connection on my behalf to a server he didn't own".
There's an election in the chain. Voters elected the national MPs, who then selected the national PMs, who then selected von der Leyen. Democratic-ish.
Oh that whole thing is a higher level! I see now, I thought that top section is actually a ground section, thanks.
I don't see this as an issue with the company. They were happy to release their code as OSS, as long as that allowed them to make enough money to develop the software. It was a win/win, and then AWS came and took advantage of that.
If you leave some apples at the side of the road, with a sign "$1 per apple" or whatever, and people largely pay enough for you to continue to pick apples, that's great. If someone starts coming every day and taking the entire crate, I don't blame you for discontinuing the convenient apple sales, I blame the thief.
That just leads to bigger fools. I don't just mean that as clever wordplay, but as a serious point. No matter how sloppy you make your API someone else will use it even more sloppily. Now you've got an enormous sloppy surface you can't properly contain or maintain, and people still transgress its boundaries even so.
The robustness principle has its times and places, but the general consensus that it should be applied everywhere to everything was a big mistake. The default should be that you are very rigid and precise and only apply the robustness principle in those times and places it applies, and I'm perfectly comfortable deploying something precise and waiting to find out that this was one of them. The vast majority of APIs are not the time and place for the robustness principle. They're the time and place for careful precision about exactly what is provided, and detailed and descriptive error messages, logging, and metrics for when the boundaries are transgressed.
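As a toy illustration of that stance (the schema and field names are made up), a strict validator that names every violation instead of silently guessing:

```python
# "Rigid by default": reject anything outside the declared schema
# with a descriptive error, rather than accepting and guessing.
# The schema here is illustrative, not from any real API.

REQUIRED = {"name": str, "port": int}

def parse_config(payload: dict) -> dict:
    errors = []
    for field, typ in REQUIRED.items():
        if field not in payload:
            errors.append(f"missing required field '{field}'")
        elif not isinstance(payload[field], typ):
            errors.append(
                f"field '{field}' must be {typ.__name__}, "
                f"got {type(payload[field]).__name__}")
    for field in payload:
        if field not in REQUIRED:
            errors.append(f"unknown field '{field}' (strict mode)")
    if errors:
        raise ValueError("; ".join(errors))
    return payload
```

The point isn't the validation itself, it's that every rejection comes with a message precise enough to fix the caller, which is the "detailed and descriptive error messages" half of the bargain.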
> it's really weird to say that the forks "resulted" in the license changes when those forks where a response to the license changes
But those license changes were a response to how AWS was monetizing their work in ways unsustainable for the upstream projects.
Not sure if it was the RAM crisis or just the general cost of hardware in 1990, but I got a PC 386SX with 2MB and a 20 MB HDD, with DR-DOS 5 and Windows 3.1, on credit.
I could have gotten OS/2 if I'd been willing to pay, in Escudos, what would be an extra 500 to 1000 euros in today's money for the additional hardware requirements.
> the state reserves some of the harshest punishments for counterfeiters
This is empirically untrue [1].
[1] https://en.wikipedia.org/wiki/Counterfeit_money#Penalties_by...
Wow enforcing laws and cultural taboos deters crime?
Right. An operation is idempotent only if doing it twice has the same result as doing it once. If you have to worry about whether an operation has already been done, it's not idempotent. If you have to worry about order of operations, it's not idempotent.
What's being asked for here is eventual consistency. If you make the same request twice, the system must settle into the same state as if it had been done only once. That's the realm of conflict-free replicated data types, which the article is trying to re-invent.
x = 1 is idempotent; x = x + 1 over a link with delay and errors is a problem that requires the heavy machinery of CRDTs.
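A grow-only counter is the textbook CRDT answer to the x = x + 1 case; here's a minimal sketch (my own illustration, not from any library):

```python
# G-Counter CRDT sketch: increments like x = x + 1 become safe to
# replay and reorder because merge is an element-wise max, which is
# idempotent and commutative.

class GCounter:
    def __init__(self, node_id):
        self.node_id = node_id
        self.counts = {}  # node_id -> that node's increment count

    def increment(self, n=1):
        self.counts[self.node_id] = self.counts.get(self.node_id, 0) + n

    def merge(self, other):
        # Merging twice, or in any order, gives the same state.
        for node, count in other.counts.items():
            self.counts[node] = max(self.counts.get(node, 0), count)

    def value(self):
        return sum(self.counts.values())
```

Each replica only increments its own slot, so a duplicated or reordered merge over a flaky link can never double-count.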
¥99,000 [$632 USD]
"Limited to 650 units. Sold Out"
https://www.casio.com/jp/basic-calculators/product.S100X-JC1...
> generally if people get angry there is a reason other than "things are changing"
Silicon Valley’s leaders have been one-upping each other on messaging to the public that they’re building a doomsday device. And then, bewilderingly to the outside, all of us who see through that bullshit appear to merrily go along with the apparent suicide pact.
Most Gen Z, it appears, can also see through the bullshit. But about a third of them taking the message sincerely seems par for the course, and as you said, I wouldn’t assume it’s just aversion to change.
Hell no. Far too many registers, not enough instructions, and (especially with ARM64) weird restrictions that arose from trying to pack things into 32-bit instructions as efficiently as possible.
I've been writing x86 Asm for a few decades. RISCs are simpler in all the wrong ways. After all, "just use a (stupid) compiler" was the whole philosophy.
Something parallel: there's a Black Mirror episode, 7.1 (Common People), where a man pulls out his own teeth, puts his tongue in a mousetrap, and otherwise tortures/harms his body to earn money on the Internet.
Edit/Add: I asked Claude to find that episode as I explained part of the storyline, and it's now asking me to seek help. The early Internet would now, definitely, be totally banned.
Edit2: Is this new, or am I just now noticing it? I cannot reply to my replier below. I’m sure @stavros hasn’t blocked me. But, yes, we will always call him Roy. That is the only way we remember him.
This is just lazy AI use as a replacement for lazy stock image use. The details of exactly how it sucks at its job while providing something that fills a checkbox for someone who has no concern for quality are somewhat different, but the basic failure is the same.
Which is why now companies can happily reduce head count.
Most people also understand that, because they're not "frequent" users of a thing, they absolutely suck at using it, and set their expectations accordingly. In particular, they realize that doing anything non-trivial with the thing requires them to spend some learning and practice time, or asking/hiring a "frequent" user to do it for them.
So the reasonable response to being told you're holding your scissors wrong is to realize that yes, you most likely are holding your scissors wrong[0], and ask the other person for advice (or just to do the thing), or look up a YouTube video and learn, or sign up to a class, or such.
Expecting mastery in 30 seconds is not a reasonable attitude, but it's unfortunately the lie that the software industry has tried to sell to people for the past 15 years or so.
--
[0] - There's much more to it than one would think.
Epoxy resins are usually pretty toxic when uncured too.
Writing 4 consecutive pixels at 0x0 stores those in video memory 64KB apart.
I don't think "64KB apart" would make much sense either, especially because of the flexibility of the VGA memory controller described in the article; they end up in 4 separate 64KB planes. Unless you're referring to the linear view of the framebuffer that post-VGA GPUs use, in which case the mapping between the planes and LFB can differ considerably between implementations: https://stackoverflow.com/questions/36269239/meaning-of-byte...
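For illustration only, here's the commonly described chain-4 decode, where the low two bits of the CPU offset select the plane; as the linked answer notes, real implementations can differ in how the in-plane offset is derived, so treat this as the simplified textbook model:

```python
# Simplified chain-4 (mode 13h) address decode: the low two bits of
# the CPU byte offset pick the plane, so four consecutive writes land
# in four different 64KB planes. This is the commonly described
# model, not a claim about any specific VGA implementation.

def chain4_decode(cpu_offset):
    plane = cpu_offset & 3          # low two bits select the plane
    offset_in_plane = cpu_offset >> 2
    return plane, offset_in_plane
```

Under this model, bytes written at offsets 0..3 go to planes 0..3 at the same in-plane offset, which is why "64KB apart" only makes sense in some particular linearized view of the planes.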
The low execution quality of Meta's metaverse effort surprised me, too.
But they wanted it to run on their relatively weak headgear. A good metaverse needs a decent gamer PC, a serious GPU, and a few hundred megabits per second of Internet bandwidth. (I've written a Second Life client in Rust, so I'm very aware of the system requirements.) Facebook needs to serve a user base which is mostly phones and people with weak PCs. Not Steam users.
If you have to squeeze it onto underpowered hardware, you get something like Decentraland or R2 or Horizon - low rez, very limited detail, small contained areas. Roblox has made some progress on this problem, but it took them two decades, even with a lot of money.
The real problem with metaverses is that a big, realistic virtual world is a technical achievement, but not particularly fun. It's a world in which you can spend time and meet people, but the world is not a game. It has no plot or agenda. This throws many new Second Life users. They find themselves in a virtual world the size of Los Angeles, with thousands of options, and are totally lost. It's not passive entertainment. As Ted Turner (CNN, TBS, etc.) used to say, "the great thing about television is that it's so passive."
It is not in fact the same in the USA. You cannot be held indefinitely without a judicial hearing and without access to a lawyer in the US. You can in Japan, and in fact that's the norm.
There are some real gems in the sea of slop, and as archivists and historians, they shouldn't moderate.
Maybe it's time for France to reconsider its relationship with the EU.
You're replying to the original author of Bun. Given the usage of Bun, and the fact that his company (primarily him, actually) was recently acquired by Anthropic for what I'm guessing was a bajillion dollars, I think he probably already knows his work is significant and that he made something interesting.
This comment sure didn't age well: https://news.ycombinator.com/item?id=48050964
> LLMs came along and erased that assumption. Now you don't know if that e-mail, that 12-page design document, the 100 or 1000 line PR, or those 10 Jira tickets were written by someone who invested a lot of their own time into producing something, or if they had their AI subscription generate something that looked plausible.
Oh, we know. It's pretty clear in many cases.
Nothing Jarred said is an assertion other than "There’ll be a blog post with more details."
I have not had time to look at the code myself, but from when this was initially posted to Reddit, IIRC it had around a thousand global mutable variables, which are unsafe to access.
I am very curious what the numbers are once the test suite passes and after a few passes of reducing the amount of unsafe.
Calif is just killing it these past couple months. Reminder that Calif is Thai Duong's new firm.
While I believe the comment you are replying to may be too broad considering "all tech", I also strongly agree with the overall sentiment (and in particular I commend ost-ing for putting a general feeling I think a lot of people have so clearly and succinctly into words).
As a Gen Xer, I grew up with a strong belief in the "goodness" of technology, of its power to make people's lives better and to ameliorate suffering. So after 25 years of seeing so much invested into technology that actively makes people's lives worse (e.g. ad-tech, social media algorithms), and even conservatively just results in the huge accumulation of wealth and power to the very few, I can't help but feel extremely disillusioned.
Yes, I like showers and soap and running water, but I rarely see the type of economic investment into tech these days that will have as broad of a beneficial impact as running water did.
> I'm not convinced that involuntary incarceration will actually fix the problem.
Not to sound too crass, but doesn't that pretty much "fix the problem" (i.e. homeless people on the street) by definition?
This is a good talk. Really gets into the details of how things differ from the classical SaaS or consumer product.
I've been doing reliability for most of my career, and have always been able to hide behind, "We're not a bank, if we lose a few requests it doesn't matter". They can't do that. :)
One advantage that they have is that the market closes, so they can do maintenance that takes the whole system down, but when you're running a global consumer product, it's a lot harder to do that without pushback.
So for most of us, our stress is around zero downtime maintenance, and theirs is around never dropping a request when the system is live.
Plenty, Microsoft has security teams whose job is to attack Windows.
Naturally they don't do blog posts about what they find.
Obviously there is a huge trend of "rewrite X in Rust". I understand why, Rust is a huge improvement in safety and speed.
My question is, to people even older than me (and I'm certainly not young), does anyone remember this much enthusiasm about people rewriting C code into (C++/Java/Whatever was new and hot)? Because I don't, but maybe I missed it.
As a user I actually like Gatekeeper. 95% of the time it's not a problem. The other 5% of the time I have to click a button in my settings to allow unsigned code. But at least it gives me pause to think about the source and whether I really trust it (which is mostly offloaded to Apple the other 95% of the time).
Free business idea: get an Apple developer account and then agree to sign code for other people in exchange for a small piece of their income. I'm surprised that doesn't exist yet (or does it?).
Yes, downloaded files have a specific attribute, and unless you explicitly unblock the file, it will give a warning.
You have to distribute a "bundle" in a particular directory layout.
The first time I used an IBM PC I was so disappointed. On every aspect of the interaction my Apple II would run rings around it. Character IO via the BIOS on CGA was glacial to avoid writing to VRAM and getting snow, and an 8088 at 4.77 MHz was not nearly 4.77 times faster than the 6502 at 1 MHz - in fact, it felt slower.
It’s not that the 8088 was a horrible CPU - it was a pretty ok one - it’s just that the 6502 was a beast of a CPU.
Stepper motors are not good at stationary power consumption; they draw power even when holding position with no load on them.
Brakes for robot joints are common in industrial robots. They're usually part of the emergency stop system. If power fails, the controller crashes, or someone pushes the emergency stop button, spring-driven brakes lock all major joints to stop all motion.[1] That might be useful in a quadruped, which can park without active balance.
[1] https://www.techbriefs.com/component/content/article/28812-c...
The BBC's style guide amuses me. The princes must be referred to by their titles, as must Sir David and anyone else with a knighthood.
I understand it's pretty common in the UK, but as an American it's funny to see.
Oh, good. We need more backups.
The one in Egypt doesn't get updated.
> give me an option to actually run it without having to manually go into System Settings each and every time without disabling security features?
People reflexively hit yes to these things.
Much of what was good in Hack just got rolled into PHP.
> why: I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues. it would be so nice if the language provided more powerful tools for preventing these things.
As expected. Modula-2 / Object Pascal-style safety was great during the last century, before automatic resource management and improved type systems became common in this century.
Naturally, I also have to note: wasn't this supposed to be only an experiment, nothing serious?
Back at Caltech, one of the students realized that the only thing limiting the brightness of an LED was heat dissipation. So, he dipped an LED into liquid nitrogen, and cranked up the current. It got pretty bright before it melted.
Naturally, he realized that the clear plastic blob it was inside was an insulator. How to fix - he filed it down to the bare minimum that would hold it together. This time, it would light up a whole room!
Liquid nitrogen is all one needs to make bright LEDs.
They simply scaled until their principles became inconvenient, and then they stopped mentioning them. That's Google and "Don't be Evil".
"Taking an availability hit" is also an "in the limit" case that mostly serves to illustrate the falsity of "disclose or patch" as a binary. Much more commonly: a fully disclosed vulnerability arms systems teams with enough information to mitigate; pull kernel modules, change permissions, that sort of thing.
This seems closely related to the problem of model collapse [1][2][3], where LLMs lose the tails of the distribution, and so when you recursively train on the output of an LLM, or otherwise feed the output back into the input in subsequent stages, you lose the precision and diversity that human authors bring to the work. Eventually everything regresses to the mean and anything that would've made the content unique, useful, and differentiated gets lost.
My takeaway from this is that AI is a temporary phenomenon, the end stage of the Internet age. It's going to destroy the Internet as we know it, as well as much of the technological knowledge of the developed world, and then we're going to have to start fresh and rebuild everything we know. So I'm trying to use AI to identify and download the remaining sources of facts on the Internet: the human-authored stuff that isn't generated for engagement but comes from the era when people were just putting useful stuff online to share information.
[1] https://en.wikipedia.org/wiki/Model_collapse
[2] https://www.nature.com/articles/s41586-024-07566-y
[3] https://cacm.acm.org/blogcacm/model-collapse-is-already-happ...
I know 16-year-olds, and 15-year-olds, and 14-year-olds, who absolutely know what goes on in a job hunt because they are observant, socially aware, and have watched relatives send literally hundreds of resumes and get nothing back.
And those kids ... inexperienced, no mortgage, no creditors, no "real world" responsibilities ... absolutely see it.
When someone builds something using the tools at hand and the experience they have, it definitely matters how old they are and how much they've done. That shapes how you give feedback, both in style and content.
I know a lot of bright, intelligent, keen, motivated kids, and in every way I encourage them to go and build things that they think are relevant and important, even if I don't agree. The experience will shape them and make them better.
You can't register a .ch domain with fewer than 3 characters. It's showing as available because the availability checker only looks at whether it's registered, not whether it's allowed.
For a sec there I thought it said "Roadside Picnic"
You could do worse than spend some time with this fantastic 1972 novel (foreword by Ursula K. Le Guin):
https://content.cosmos.art/media/pages/library/roadside-picn...
Bonus: this version has an afterword by author Boris Strugatsky which is quite entertaining.
It was easier a few years back.
Also https://news.ycombinator.com/item?id=48068333, but got little traction.
I'm suspicious of their results with regards to tool usage.
It's unsurprising that round-tripping long content through an LLM results in corruption. Frequent LLM users already know not to do that.
They claim that tool use didn't help, which surprised me... but they also said:
> To test this, we implemented a basic agentic harness (Yao et al., 2022) with file reading, writing, and code execution tools (Appendix M). We note this is not an optimized state-of-the-art agent system; future work could explore more sophisticated harnesses.
And yeah, their basic harness consists of read_file() and write_file() - that's just round-tripping with an extra step!
The modern coding agent harnesses put a LOT of work into the design of their tools for editing files. My favorite current example of that is the Claude edit suite described here: https://platform.claude.com/docs/en/agents-and-tools/tool-us...
The str_replace and insert commands are essential for avoiding round-trip risky edits of the whole file.
They do at least provide a run_python() tool, so it's possible the better models figured out how to run string replacement using that. I'd like to see their system prompt and if it encouraged Python-based manipulation over reading and then writing the file.
Update: found that harness code here https://github.com/microsoft/delegate52/blob/main/model_agen...
The relevant prompt fragment is:
    You can approach the task in whatever way you find most effective: programmatically or directly by writing files
As with so many papers like this, the results reflect more on the design of the harness that the paper's authors used than on the models themselves. I'm confident an experienced AI engineer / prompt engineer / pick your preferred title could get better results on this test by iterating on the harness itself.
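For illustration, a minimal str_replace-style tool might look like this (my own sketch, not Anthropic's or anyone's actual implementation):

```python
# Sketch of a str_replace-style edit tool: the model supplies an
# exact old snippet and its replacement, avoiding a full-file
# round trip. Rejecting non-unique matches forces the model to
# provide enough context to pinpoint one edit site.

from pathlib import Path

def str_replace(path, old, new):
    text = Path(path).read_text()
    count = text.count(old)
    if count == 0:
        return "error: old_str not found in file"
    if count > 1:
        return f"error: old_str matches {count} locations; make it unique"
    Path(path).write_text(text.replace(old, new, 1))
    return "ok"
```

Even this crude version means the rest of the file never passes through the model at all, which is exactly the round-trip corruption the paper's read_file/write_file harness can't avoid.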
Yes, this is what you'd want. It doesn't have to be as complicated as the HTML5 algorithm either. That's complicated because it was a harmonization of at least 3 browsers' multi-decade heuristics and untold terabytes of existing HTML practice. An algorithm unconcerned with backwards compatibility could be much simpler, yet still clearly define error behavior that's much easier to use than "scream and die".
And it's still unambiguous. You can cringe at what some people do, but that would be strictly a taste issue rather than a technical one, as the parse would still be deterministic. And if you think you can fix taste issues with technical specification, well, you've already lost anyhow.
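A toy sketch of what clearly defined error behavior can look like, far simpler than HTML5's rules (the recovery rules here are invented purely for illustration):

```python
# Toy tag parser with defined error behavior: a stray close tag is
# ignored, and anything still open at end of input is implicitly
# closed. Every input, however malformed, yields exactly one tree.

import re

def parse(text):
    root = {"tag": "#root", "children": []}
    stack = [root]
    for match in re.finditer(r"<(/?)(\w+)>|([^<]+)", text):
        close, tag, chars = match.groups()
        if chars:
            stack[-1]["children"].append(chars)
        elif close:
            # Recovery rule 1: ignore a close tag that matches nothing.
            if tag in [n["tag"] for n in stack[1:]]:
                # Recovery rule 2: closing an outer element implicitly
                # closes everything opened inside it.
                while stack[-1]["tag"] != tag:
                    stack.pop()
                stack.pop()
        else:
            node = {"tag": tag, "children": []}
            stack[-1]["children"].append(node)
            stack.append(node)
    return root  # rule 3: anything left open is implicitly closed
```

Three fixed rules, no heuristics, and no input can make it "scream and die"; whether people then write sloppy markup against it is the taste issue, not a parsing one.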
>This could be right for the current architecture of LLMs, but you can come up with specialized large language models that can more efficiently use tokens for a specific subset of problems by encoding the information differently.
That's precisely what happens on the bad side of a S curve.
>Templated though, not manually writing it out for every blog post say.
Both. We manually wrote HTML just fine back in the day.
>and it brings about a global dark age of poverty and inequality by completely eliminating the value of labor vs capital
So, like the past 20 years?
Can't say I hate the HTML 5 spec. It resolves the ambiguities that made previous HTML specs insufficient to make a working web browser.
The standards that make my life miserable at times are the secondary standards like GDPR and WCAG as well as the de facto "standard" systems we are forced to participate in such as Cloudflare, the advertising economy, etc.
It's easy to say "WebUSB is bloat" and I'd certainly say PWA is something that could only come out of the mind that brought us Kubernetes, but lately I've been building biosignals applications and what should my choice be: write fragile GUI applications for the desktop that look like they came out of a lab and crash from memory leaks or spend 1/5 the time to make web applications that look like they belong in the cockpit of a Gundam and "just work"?
And one could argue that, without actually focusing on the Linux kernel and the Linux distros on top of it for the average user, they're just funding server FOSS for use by fat companies.
As a child I saw an acted segment about ball lightning on children's TV, following a person around the house, and had nightmares for a long time afterwards. The thing is spooky as hell.
Yes, so far, but it's switching heavily towards Markdown.
I wonder how long it takes to back it up.
Non-paywall link: https://www.nytimes.com/2026/04/25/world/asia/japan-care-wor...