What are the most upvoted users of Hacker News commenting on? Powered by the /leaders top 50 and updated every thirty minutes. Made by @jamespotterdev.
I remember dealing with this BS back in 2017. It was clear to me that containers were, more than anything else, a system for turning 15MB of I/O into 15GB of I/O.
So wow and new shiny though so if you told people that they would just plug their ears with their fingers.
I'm curious what advantages this has over adding durability to an existing language, like DBOS does:
https://github.com/dbos-inc/dbos-demo-apps/blob/main/python/...
You are correct: my bad! Edited per your comment. Thank you.
There's a spicy argument to be made that "Rewrite it in Rust" is actually an environmentalist approach.
And totally pales when compared to operation spiderweb both in precision and the amount of damage done.
Until recently (and probably with some pressure from VW) everything else was supposed to be phased out in Europe within a decade: https://www.spglobal.com/automotive-insights/en/blogs/2025/1...
I can just imagine all the other IPv6 security holes that have gone unnoticed because nobody is using IPv6.
This is fascinating to me. I am allergic to pork (or I would say intolerant, when I eat it I get a headache and/or stomach ache). But I did try a piece of wild boar once, and was fine after that. I will have to look into this!
Thank you! So much more helpful.
And there was a real cliff in recording time, not a marginal difference: a normal VHS tape could record a typical TV show, a normal Betamax tape could not. The utility function is a step function here.
(Both got more recording time through Long Play techniques, a.k.a. quality degradation, and through actually longer magnetic tape in the cassette, but at least in the beginning it was clear-cut.)
The last thing this world needs is my handwriting spreading beyond my local community!
But I would have loved to use this to capture my kid's kindergarten handwriting. Maybe I still have a sample around here...
There are 30,000 different x-platform GUI frameworks and they all share two attributes: (1) they look embarrassingly bad compared to Electron or native apps, and (2) they are mostly terrible to program for.
I feel like I'm never wasting my time when I learn how to do things with the web platform, because it turns out the app I made for desktop and tablet works on my VR headset. Sure, if you pay me 2x the market rate and it's a sure thing, you might interest me in learning Swift and how to write iOS apps, but I am not going to do it for a personal project, or even a moneymaking project where I am taking some financial risk. No way. The price of learning how to write apps for Android is that I also have to learn how to write apps for iOS, and apps for Windows, and apps for macOS, and decide what's the least-bad widget set for Linux and learn to program for it too.
Every time I do a shoot-out of Electron alternatives Electron wins and it is not even close -- the only real competitor is a plain ordinary web application with or without PWA features.
> The company and the Justice Department reached a surprise settlement on Monday, following a week of testimony during an antitrust trial that threatened to potentially separate the world’s largest live entertainment company.
Someone greased a few palms.
Sure, but then the taxpayer has to pay for it anyway. https://news.tvbs.com.tw/english/2690584
"TAIPEI (TVBS News) — Premier Cho Jung-tai (卓榮泰) announced on Tuesday (Nov. 19 2024) plans to subsidize Taiwan Power Company (台灣電力公司) with NT$100 billion to address rising international fuel costs and stabilize prices"
=> over $3bn USD! This is not a small amount of money.
It might be a good time to move PyCon US to Canada again.
I dunno, previous generation Hakka tires were a revolution that has since been duplicated by other brands. Studless snows are really good, retractable studs can only add so much.
That would be a hypothesis, not a fact.
I'm not closed to it. You can check my comment history for frequent references to next-generation AIs that aren't architected like LLMs. But they're going to have to produce an AI of some sort that is better than the current ones, not hypothesize that it may be possible. We've got about 50 years of hypothesis about how wonderful such techniques may be and, by the new standards of 2026, precious few demonstrations of it.
Quoting from the article:
"Within five years, deep learning had consumed machine learning almost entirely. Not because the methods it displaced had stopped working, but because the money, the talent, and the prestige had moved elsewhere."
That one jumped right out at me because there's a sleight of hand there. A more correct quote would be "Not because the methods it displaced had stopped working as well as they ever have, ..." Without that phrase, the implication that other techniques were doing just as well as our transformer-based LLMs is slipped in there, but it's manifestly false when brought up to conscious examination. Of course they haven't, unless they're in the form of some probably-beyond-top-secret AI in some government lab somewhere. Decades have been poured into them and they have not produced high-quality AIs.
Anyone who wants to produce that next-gen leap had probably better have some clear eyes about what the competition is.
I think the worst thing about the golden age of symbolic AI was that there was never a systematic approach to reasoning about uncertainty.
The MYCIN system was rather good at medical diagnostics and like other systems of the time had an ad-hoc procedure to deal with uncertainty which is essential in medical diagnosis.
The problem is that it is not enough to say "predicate A has an 80% chance of being true"; rather, if you have predicates A and B you have to consider the probability of all four of (AB, (not A)B, A(not B), (not A)(not B)), and with N predicates you have to consider joint probabilities over 2^N possible situations, and that's a lot.
For any particular situation the values are correlated and you don't really need to consider all those contingencies but a general-purpose reasoning system with logic has to be able to handle the worst case. It seems that deep learning systems take shortcuts that work much of the time but may well hit the wall on how accurate they can be because of that.
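The counting argument above can be made concrete with a toy sketch (hypothetical code, just illustrating how the joint distribution grows):

```python
# Toy illustration: a full joint distribution over N boolean predicates
# needs 2**N entries, one per combination of truth values.
from itertools import product

def joint_table_size(n_predicates: int) -> int:
    """Number of entries in a full joint distribution over n booleans."""
    return 2 ** n_predicates

# The four contingencies for two predicates A and B, as in the comment:
# AB, (not A)B, A(not B), (not A)(not B).
contingencies = list(product([True, False], repeat=2))

print(len(contingencies))       # 4
print(joint_table_size(10))     # 1024
print(joint_table_size(30))     # 1073741824 -- over a billion entries
```

At 30 predicates the table is already past a billion entries, which is why general-purpose probabilistic reasoners need independence assumptions or other structure to stay tractable.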
Sovereignty; the days of peaceful geopolitics are behind us.
Somehow that’s an often missed aspect of this. Yeah, ditching coal has a wide array of nice side effects. It has killed many, many more than the world’s nuclear accidents.
That's just nuts. That's more processing power than the first 10 computers I owned combined.
Still on time. It is almost two months now, and this is such a deep subject with so many little tricky bits that I wonder if I will be able to complete the thing, but there is still (slow) progress. I never suspected the amount of hard work that goes into building something that is stable at the nanosecond scale. But I'm becoming more appreciative every day ;)
And courts keep wondering why commoners lose respect for the law. I know a judge and had a couple of really interesting conversations with him. We agreed on lots of things but there was one item that stood out for me that made a massive difference in interpretation: to him the map was the territory, he saw the law as the thing that made the world, not the other way around. I always found that to be extremely interesting in that it explains why some of those decisions come across so completely tone deaf. On paper it may all look like it makes sense but in the real world it leads to bonkers effects.
I think this is true.
The closest analogy for AI, IMO, is not intellisense or auto-complete. It is cloud (there's a case to be made for compilers too; I'll leave that as an exercise for the reader).
Cloud, like AI:
* transformed hard things that took a lot of time and expertise into simpler things (do you remember setting up database replication?)
* came with a lot of hype but ultimately provided a lot of value (do you remember grid computing?)
* had plenty of skeptics (see this reddit thread which has some examples[0])
* was adopted at various speeds depending on the person and company
* caused security concerns[1]
In the end, people found a place for cloud. It is still growing, but not everything will run on cloud.
The same is true of AI. People will find a place for it. It won't do everything. But just as many sysadmins were forced to adapt to cloud, many developers will be forced to adapt to AI.
0: https://www.reddit.com/r/aws/comments/59ty7u/which_companies...
1: https://www.sciencedirect.com/science/article/abs/pii/S10848...
The aesthetic is so incredibly 1998. Reminds me not just of SimCity 2000 but the lesser played "A-Train", with its gentle day-night cycle.
This is a privately owned company.
If the AI people win, there will be no human audience for your writing. I don't think people will write simply to benefit someone else's subscription-funded service.
"the vast majority of our media and culture has been created by media and culture specialists" - well yes obviously, just as our technology has been created by technology specialists.
Not so much composability in async Rust.
Of course I think this, otherwise I would not have written it. I have also seen people lose their bicycles and am now wondering what they will do next to keep their minds sane.
>The best-sounding note combinations (to Western people) are the ones derived from the first few harmonics. In other words, you get the nicest harmony (for Western people) when you multiply and divide your frequencies by ratios of the smallest prime numbers: 2, 3, and 5.
He keeps writing "for Western people", but some of this is inherent in the evolution of the human ear and rather universal. All around the world we find pentatonic music, for example, even from ancient peoples, including West African cultures, China, etc. And traditions that have microtonal inflections still place the same emphasis on the octave, the fifth, the major/minor third, etc. The microtones add different flavors, but it's not some wildly different thing, which is why e.g. Middle Eastern or Indian songs can still be played on pianos, simplified (to the nearest approximation) but still retaining a lot, just losing their full flavor.
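A minimal sketch of the small-prime-ratio idea in the quoted passage, assuming standard Western interval names and A4 = 440 Hz:

```python
# Consonant intervals as frequency ratios built from the small primes 2, 3, 5,
# illustrating the quoted claim. Interval names are the standard Western ones.
from fractions import Fraction

intervals = {
    "octave":         Fraction(2, 1),  # prime 2
    "perfect fifth":  Fraction(3, 2),  # primes 3, 2
    "perfect fourth": Fraction(4, 3),
    "major third":    Fraction(5, 4),  # primes 5, 2
    "minor third":    Fraction(6, 5),
}

a4 = 440.0  # Hz, concert A
for name, ratio in intervals.items():
    print(f"{name}: {a4 * float(ratio):.1f} Hz")
# e.g. the perfect fifth above A4 lands at 660.0 Hz, the octave at 880.0 Hz
```

The pentatonic scales mentioned above can be built by stacking just the 3:2 fifth, which is one reason they recur across unrelated musical cultures.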
I'd agree, I've been building a personal assistant (https://github.com/skorokithakis/stavrobot) and I'm amazed that, for the first time ever, LLMs manage to build reliably, with much fewer bugs than I'd expect from a human, and without the repo devolving to unmaintainability after a few cycles.
It's really amazing, we've crossed a threshold, and I don't know what that means for our jobs.
Support inside the browser hardly matters if it isn't maintained; Google has done almost nothing to their DWARF debugging tooling since it was introduced as a beta a few years ago.
Apple clearly no longer cares about the workstation market; the Mac Pro has joined OS X Server.
I was betting on the 1TB Mac Studio, but half a terabyte was already an insane amount of memory.
Developing the AArch64 code generator for the DMD D compiler.
One thing we did at reddit for a while was put posts from new people in "jail". They would show up in a special yellow box at the top of the home page for accounts that tended to be early upvoters of things that became successful later (our Nostradamuses, so to speak), and if a post got enough upvotes from that group it got out of jail and was placed on the regular /new page.
So maybe some sort of filter like that? Only show it to those kinds of accounts at first?
The downside is that if that group isn't big enough you get a lot of groupthink, but if your sample is wide enough, it can be avoided. To be honest, I don't recall why we stopped doing it.
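A loose sketch of the gating logic described above; the function names, data shapes, and thresholds are all hypothetical, not reddit's actual implementation:

```python
# Hypothetical sketch of an "early-predictor jail" filter. A user counts as a
# predictor if enough of their early upvotes went to posts that later became
# popular; a jailed post is released once enough predictors upvote it.

def is_predictor(user_history, hit_rate_threshold=0.3):
    """user_history: list of dicts with 'upvoted_early' and 'became_popular'."""
    early = [p for p in user_history if p["upvoted_early"]]
    if not early:
        return False
    hits = sum(1 for p in early if p["became_popular"])
    return hits / len(early) >= hit_rate_threshold

def released_from_jail(predictor_votes, release_threshold=5):
    """A jailed post moves to the regular /new page past the vote threshold."""
    return predictor_votes >= release_threshold

history = [
    {"upvoted_early": True, "became_popular": True},
    {"upvoted_early": True, "became_popular": False},
    {"upvoted_early": True, "became_popular": True},
]
print(is_predictor(history))        # True (2/3 >= 0.3)
print(released_from_jail(6))        # True
print(released_from_jail(2))        # False
```

The groupthink risk mentioned above shows up here as the predictor pool: if it is small, the same few accounts decide what escapes jail.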
The model is not the system. The model is a component of the system. The "cron job" (or other means by which a continuous action loop is implemented) and the necessary prompting for it to gather input (including subsequent user input or other external data) and to pursue a set of objectives which evolves based on input are all also parts of the system.
I’ll admit I leaned way too hard into the "movie hacker" aesthetic for the UI
Nothing wrong with that. Beats a boring corporate dashboard any day. Video game and similar interfaces work for a reason.
These agents are all calling APIs that are well beyond your control. How does it matter whether a thin CLI wrapper is running on your computer or not?
Getting bombed tends to make people care a lot, just not for the people doing the bombing.
Found while comparing options to catalog magnet torrents and search the BitTorrent Distributed Hash Table (DHT). Similar to https://github.com/sergiotapia/magnetissimo | https://news.ycombinator.com/item?id=13505226
Photoshop does the same with the .raw extension.
Satellite killers are cheap and effective for nation states, if anyone ever lifts to orbit.
Not the first time something Apple did was slightly phallic --- OS X El Capitan (now 10 years ago!) had a slightly unfortunate slogan:
https://techcrunch.com/2016/02/25/more-to-love-with-every-di...
As an aside, your link leads to an image/webp served with a .jpeg extension. WTF?
I made my own AI personal assistant:
https://github.com/skorokithakis/stavrobot
It's like OpenClaw but actually secure, without access to secrets, with scoped plugin permissions, isolation, etc. I love it, it's been extremely helpful, and pairs really well with a little hardware voice note device I made:
“Be greedy when others are fearful.”
I assume it's mostly the same thing that keeps non-Catholics from taking communion at Catholic mass.
https://www.familysearch.org/ is a free resource for those needing to collect evidence to prove descent for countries that offer a path to citizenship via descent.
The challenge I'm finding with sandboxes like this is evaluating them in comparison to each other.
This looks like a competent wrapper around sandbox-exec. I've seen a whole lot of similar wrappers emerging over the past few months.
What I really need is help figuring out which ones are trustworthy.
I think this needs to take the form of documentation combined with clearly explained and readable automated tests.
Most sandboxes - including sandbox-exec itself - are massively under-documented.
If I am going to trust them, I need both detailed documentation and proof that they work as advertised.
Recessions without an external shock are rare. But you could say the rest of the economy is in stagnation. I'm expecting a real recession as soon as the LLM bubble begins to burst.
Everybody says "but they just predict tokens" as if that's not just "I hope you won't think too much about this" sleight of hand.
Why does predicting the next token mean that they aren't AGI? Please clarify the exact logical steps there, because I make a similar argument that human brains are merely electrical signals propagating, and not real intelligence, but I never really seem to convince people.
>So, this all then implies that 'intelligence' is then a commodity too? Like, I'm trying to drive at that your's, mine, all of our 'intelligence' is now no longer a trait that I hold, but a thing to be used, at least as far as the economy is concerned.
This is obviously already the case with the intelligence level required to produce blog posts and article slop, generate coding-agent-quality code, do mid-level translations, and things like that...
> what is a “good” economy vs. a “bad” economy?
One that is growing sustainably with broad benefits.
American here: will there be political consequences for the now-slapped-down decision makers?
Source for Khamenei holding assets in Europe? (Genuine ask.)
Some more background and a user report of the migration process: https://gregpak.net/2025/11/13/how-and-why-i-moved-from-a-bl...
Crucially, the purpose of Blacksky is to provide a service for the (US) black community which has its own moderation decisions while being substantially interoperable.
(Remember, the reasons people use one social network rather than another are almost always social first and technical second, where the social functions are enabled or hindered by the technology)
Very agile, have to start somewhere. Keep learning and cranking on the controls to tighten them up.
Wouldn't help if the user closes the drawer and moves it to the left - which users accustomed to having apps on the left (and to a left-to-right reading culture) would.
The main problem I see is that the mouse pointer has reason to be in the proximity of the left side: the Apple menu is there, the most important dock elements are there (on a bottom dock - or the whole dock, for a left-placed dock), as are the menubar's File, Edit, etc. menus and the most common icons for toolbar operations.
And if 'content is king', navigating the content (TOC, search, thumbnail) should also be king, not just the current viewing page.
Yeah, I think what is needed is somewhere between docstrings+strategic comments, and literate programming.
Basically, it's incredibly helpful to document the higher-level structure of the code, almost like extensive docstrings at the file level and subdirectory level and project level.
The problem is that major architectural concepts and decisions are often cross-cutting across files and directories, so those aren't always the right places. And there's also the question of what properly belongs in code files, vs. what belongs in design documents, and how to ensure they are kept in sync.
Not too surprising given that a font maps bytes into glyphs, and an instruction set maps them to instructions. I suspect a 6502 or 8051 version would be much simpler.
> I find it's worse here now than X.
I disagree, but in any case the easy solution in that case is to use X instead of HN.
> At least on X reply bots are not allowed anymore.
In theory, maybe.
Right. Field sequential display means heavy flicker. Do not want.
For now there is already a pretty effective mechanism in place, downvote and/or flag those comments that you think are across the line in that sense.
But in principle I agree with you; the rule for me is "if it wasn't worth your time to write, then it certainly isn't worth 1000x other people's time to read".
> is not appalled that Iran would use any defenses against a preemptive invasion by countries it considers its sworn enemies, let alone all of those at its disposal?
Those of us who would prefer not to see schools get bombed.
Like, you have a point. Iran hasn't been playing by international law since basically its founding. America flirted with the idea of blowing off international law before fully committing to the bit in 2025, joining Russia, China and Israel. So we have a theatre where those limits don't apply.
That doesn't mean we shouldn't argue they should, or ask if Iran would have been better off playing by the rules.
Yes, sorry. Correct link: https://www.twz.com/news-features/the-misconception-that-air.... Reloaded.
“shownew” : “no|yes” option would be nice.
What is the cost to protect some? To protect all? Much more than $20k-$50k per drone.
I got really excited that I would be able to write in Markdown.
Unfortunately, from the article:
> Markdown import and export features.
I love Boskoop, and they are thankfully still all over German supermarkets. If not, Holstein Cox will do, and if they have it, Elstar.
The real good ones, like Berlepsch, are hard to find here, though, unless you travel to a plantation.
Woah, so cool when a topic I was going into in depth gets to HN.
I'm a relatively new adult beginner on the violin, and one of the fascinating (and extremely difficult) things about un-fretted string instruments is that the player has the freedom to shift the tuning around to fit the context. On the violin, we normally play melodies and scales using Pythagorean tuning (which is actually a misnomer, as Pythagoras didn't invent it; the ancient Mesopotamians did), which is based on the circle of fifths and leads to wider whole steps and narrower half steps than equal temperament tuning. But then for double stops (i.e. chords), and especially when playing in a string quartet, just intonation, which is based on the harmonic series, is used so the notes sound concordant. This page describes all the different tuning systems a violinist may use, also including 12 TET when trying to match a piano: https://www.violinmasterclass.com/posts/152.
This video shows how challenging it can be when trying to adjust intonation when playing in a string quartet: https://youtu.be/Q7yMAAGeAS4 . Interestingly, the very beginning of that video talks about what TFA discussed that when you tune all your strings as perfect fifths your major thirds will be out of tune.
I'll also put in a plug for light note, an online music theory training tool that was mentioned on HN a decade ago: https://news.ycombinator.com/item?id=12792063 . I'm not related to the owner in any way, I just bought access a few years ago and think it was the first time I really understood Western music theory. The problem with music theory is that the notation is pretty fucked up because it includes all this historical baggage, and lots of music theory courses start with what we've got today and work backwards, while I think it's a lot easier to start with first principles about frequency ratios and go from there.
Other notes (pun intended!): The violin is great for learning music theory because you can actually see on the string how much you're subdividing it - go one third of the way, that's a perfect fifth; go halfway, that's an octave; etc. Harmonics (where you lightly touch a string) are also used all the time in violin repertoire. Finally, the article mentions Harry Partch, but you should also check out Ben Johnston, a composer who worked with Partch and was famous for using just intonation. Here is his Amazing Grace string quartet, and you can really hear the difference using just intonation: https://youtu.be/VJ8Bg9m5l50
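The out-of-tune thirds mentioned above are easy to quantify; here's a small sketch comparing the major third in the three tuning systems discussed, using the standard ratios and the usual cents formula (1200 * log2(ratio)):

```python
# Compare the major third in Pythagorean tuning, just intonation, and
# 12-tone equal temperament. Ratios here are the standard textbook values.
import math
from fractions import Fraction

def cents(ratio: float) -> float:
    """Size of an interval in cents: 1200 * log2(frequency ratio)."""
    return 1200 * math.log2(ratio)

pythagorean_third = Fraction(81, 64)  # four pure fifths up, two octaves down
just_third = Fraction(5, 4)           # from the 5th harmonic
tet_third = 2 ** (4 / 12)             # four equal-tempered semitones

print(f"Pythagorean: {cents(float(pythagorean_third)):.1f} cents")  # 407.8
print(f"Just:        {cents(float(just_third)):.1f} cents")         # 386.3
print(f"12-TET:      {cents(tet_third):.1f} cents")                 # 400.0
```

The Pythagorean third is about 22 cents sharp of the just third, which is why tuning all four strings as perfect fifths leaves your major thirds audibly out of tune.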
This piece was more interesting than I went in expecting it to be. But it hinges I think on a disputable claim, that the GPL's value is based on a network effect, so that the GPL gets dramatically less useful (and thus attractive to newer projects) as the base of mainstream GPL software shrinks. I'm not sure I understand why that would be the case.
The AI rewrite of the Linux kernel also seems farfetched. I don't think it really belongs in the title of this post.
Country IQ rankings are not a real thing.
> But we are saving the lives of ~3 million people so who’s to say what is bad
Korea’s GDP per capita in 1950 was similar to that of Bangladesh around the same time: https://www.nationmaster.com/country-info/stats/Economy/GDP-.... In the alternate timeline where there isn’t a capitalist South Korea, the Korean peninsula has 100 million+ people living in poverty and squalor, like Bangladesh today. The cost of that is tens of millions of lost lives resulting from higher infant and child mortality rates.
Because it should be a given and super boring.
I remember these MacBooks did tend to break apart at the corners of the palmrests.
But I like the idea of revisiting the MacBook plastic chassis with new internals.
I would love to know what the weight is in the end.
Can the old MacBook chassis lead to a lighter computer than the current 1.23kg Macbook neo and MacBook Air?
Yeah, I can't imagine being a small team building a SaaS and not having 'deploy-on-merge' set up within the first few weeks.
Agreed, all this "but if you don't need certain skills any more, you'll lose them!" is tiring, and even more tiring because it's missing the entire point: yes, because I don't need them any more!
It feels like I'm reading an article crying "if you buy a car, you will lose your horse-shoeing skills!" every day lately.
This is a patronizing non-answer. If you don't see why, read my comment again and again until you do.
I mean yeah, it's Grok. They had to work really hard to get their preferred levels of political bias in there.
Both programs have been announced as granting six months, but neither of them has explicitly said that there won't be options to renew for another six months.
I expect they haven't decided that themselves yet and don't want to commit publicly until they've seen how well the program goes.
When you say Sonnet 4, do you mean literally 4, or 4.6?
Everyone here says "if developers are so much faster, why aren't we seeing more features?!" as if the only thing required to release a feature is developers.
My CEO keeps asking me "how can we go faster with AI", and my answer is "we can't, because even if we had developers that would instantly develop any feature perfectly, we'd still be bottlenecked on how slow we are at deciding what to actually release".
The catch is: The servers are not managed. SSH instead of mouse clicks.
...and yoghurt is not a euphemism in this case, as much as the mentions of loneliness and Japan would make it seem like that.
This may be too much advanced type theory for a useful language.
You can go all the way to formal verification. This is not enough for that. Or you can stop at the point where all memory-error holes have been plugged. That's more useful.
You can go way overboard with templates/macros/traits/generics. Remember C++ and Boost. I understand that Boost is now deprecated.
I should work some more on my solution to the back-reference problem in Rust. The general idea is that Rc/Weak/upgrade/downgrade provide enough expressive power for back references, but the ergonomics are awful. That could be fixed, and some of the checking moved to compile time for the single owner/multiple users case.
Lots of words to be weirdly wrong about things.
There certainly were a lot of minor "elite" wars in Europe where power shifted back and forth across the aristocracy with no real difference to daily life. WW2 was not one of them. Nor, historically, was the Thirty Years War.
OP may be confused by the American colonial wars since Korea. Korea is the last one where you can see the difference in outcomes for the population.
Because Ted Gioia is a musician, not a programmer.
Defending against authoritarianism is never pointless.