What are the most upvoted users of Hacker News commenting on? Powered by the /leaders top 50 and updated every thirty minutes. Made by @jamespotterdev.
The low execution quality of Meta's metaverse effort surprised me, too.
But they wanted it to run on their relatively weak headgear. A good metaverse needs a decent gamer PC, a serious GPU, and a few hundred megabits per second of Internet bandwidth. (I've written a Second Life client in Rust, so I'm very aware of the system requirements.) Facebook needs to serve a user base which is mostly phones and people with weak PCs. Not Steam users.
If you have to squeeze it onto underpowered hardware, you get something like Decentraland or R2 or Horizon - low rez, very limited detail, small contained areas. Roblox has made some progress on this problem, but it took them two decades, even with a lot of money.
The real problem with metaverses is that a big, realistic virtual world is a technical achievement, but not particularly fun. It's a world in which you can spend time and meet people, but the world is not a game. It has no plot or agenda. This throws many new Second Life users. They find themselves in a virtual world the size of Los Angeles, with thousands of options, and are totally lost. It's not passive entertainment. As Ted Turner (CNN, TBS, etc.) used to say, "the great thing about television is that it's so passive."
It is not in fact the same in the USA. You cannot be held indefinitely without a judicial hearing and without access to a lawyer in the US. You can in Japan, and in fact that's the norm.
There are some real gems in the sea of slop, and as archivists and historians, they shouldn't moderate.
Maybe it's time for France to reconsider its relationship with the EU.
You're replying to the original author of Bun. Given the usage of Bun, and the fact that his company (primarily him, actually) was recently acquired by Anthropic for what I'm guessing was a bajillion dollars, I think he probably already knows his work is significant and that he made something interesting.
This comment sure didn't age well: https://news.ycombinator.com/item?id=48050964
> LLMs came along and erased that assumption. Now you don't know if that e-mail, that 12-page design document, the 100 or 1000 line PR, or those 10 Jira tickets were written by someone who invested a lot of their own time into producing something, or if they had their AI subscription generate something that looked plausible.
Oh, we know. It's pretty clear in many cases.
I work by myself and feel great joy. Today I talked to the AI about a feature I want to add to this week's project (https://www.writelucid.cc) and it had some good feedback. Later I refactored a big part of the code to simplify it (though I had to explain to Claude why this was possible), and it came out great.
I've never been happier, I can now build everything I've been wanting to build, really fast, with very few bugs.
Nothing Jarred said is an assertion other than "There’ll be a blog post with more details."
I have not had time to look at the code myself, but from when this was initially posted to Reddit, IIRC it had around a thousand global mutable variables, which are unsafe to access.
I am very curious what the numbers are once the test suite passes and after a few passes of reducing the amount of unsafe.
Calif is just killing it these past couple months. Reminder that Calif is Thai Duong's new firm.
While I believe the comment you are replying to may be too broad considering "all tech", I also strongly agree with the overall sentiment (and in particular I commend ost-ing for putting a general feeling I think a lot of people have so clearly and succinctly into words).
As a Gen Xer, I grew up with a strong belief in the "goodness" of technology, of its power to make people's lives better and to ameliorate suffering. So after 25 years of seeing so much invested into technology that actively makes people's lives worse (e.g. ad-tech, social media algorithms), and even conservatively just results in the huge accumulation of wealth and power to the very few, I can't help but feel extremely disillusioned.
Yes, I like showers and soap and running water, but I rarely see the type of economic investment into tech these days that will have as broad of a beneficial impact as running water did.
> I'm not convinced that involuntary incarceration will actually fix the problem.
Not to sound too crass, but doesn't that pretty much "fix the problem" (i.e. homeless people on the street) by definition?
This is a good talk. Really gets into the details of how things differ from the classical SaaS or consumer product.
I've been doing reliability for most of my career, and have always been able to hide behind, "We're not a bank, if we lose a few requests it doesn't matter". They can't do that. :)
One advantage that they have is that the market closes, so they can do maintenance that takes the whole system down, but when you're running a global consumer product, it's a lot harder to do that without pushback.
So for most of us, our stress is around zero downtime maintenance, and theirs is around never dropping a request when the system is live.
Plenty, Microsoft has security teams whose job is to attack Windows.
Naturally they don't do blog posts about what they find.
Obviously there is a huge trend of "rewrite X in Rust". I understand why, Rust is a huge improvement in safety and speed.
My question is, to people even older than me (and I'm certainly not young), does anyone remember this much enthusiasm about people rewriting C code into (C++/Java/Whatever was new and hot)? Because I don't, but maybe I missed it.
As a user I actually like Gatekeeper. 95% of the time it's not a problem. The other 5% of the time I have to click a button in my settings to allow unsigned code. But at least it gives me pause to think about the source and whether I really trust it (which is mostly offloaded to Apple the other 95% of the time).
Free business idea: get an Apple developer account and then agree to sign code for other people in exchange for a small piece of their income. I'm surprised that doesn't exist yet (or does it?).
Yes, downloaded files have a specific attribute, and unless you explicitly unblock the file, it will give a warning.
You have to distribute a "bundle" in a particular directory layout.
The first time I used an IBM PC I was so disappointed. On every aspect of the interaction my Apple II would run rings around it. Character IO via the BIOS on CGA was glacial to avoid writing to VRAM and getting snow, and an 8088 at 4.77 MHz was not nearly 4.77 times faster than the 6502 at 1 MHz - in fact, it felt slower.
It’s not that the 8088 was a horrible CPU - it was a pretty ok one - it’s just that the 6502 was a beast of a CPU.
Stepping motors have poor stationary power consumption. They draw power even with no load on them.
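Back-of-the-envelope on the holding cost (winding values assumed, typical of a small NEMA 17, not from the comment):

```python
# Illustrative standstill dissipation of a stepper motor.
# A stepper holds position by keeping current flowing in its windings,
# so it burns I^2 * R per energized phase while doing zero mechanical work.
current_a = 1.5       # rated phase current in amps (assumed)
resistance_ohm = 2.0  # winding resistance per phase in ohms (assumed)
phases = 2            # bipolar stepper: two energized phases

holding_power_w = phases * current_a**2 * resistance_ohm
print(holding_power_w)  # 9.0 watts dissipated just to hold still
```

Nine-ish watts of heat to produce no motion is why servo-with-brake designs win for parked loads.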
Brakes for robot joints are common in industrial robots. They're usually part of the emergency stop system. If power fails, the controller crashes, or someone pushes the emergency stop button, spring-driven brakes lock all major joints to stop all motion.[1] That might be useful in a quadruped, which can park without active balance.
[1] https://www.techbriefs.com/component/content/article/28812-c...
The BBC's style guide amuses me. The princes must be referred to by their titles, as well as Sir David and anyone else with a knighthood.
I understand it's pretty common in the UK, but as an American it's funny to see.
Oh, good. We need more backups.
The one in Egypt doesn't get updated.
> give me an option to actually run it without having to manually go into System Settings each and every time without disabling security features?
People reflexively hit yes to these things.
Much of what was good in Hack just got rolled into PHP.
Back at Caltech, one of the students realized that the only thing limiting the brightness of an LED was heat dissipation. So, he dipped an LED into liquid nitrogen, and cranked up the current. It got pretty bright before it melted.
Naturally, he realized that the clear plastic blob it was inside was an insulator. How to fix - he filed it down to the bare minimum that would hold it together. This time, it would light up a whole room!
Liquid nitrogen is all one needs to make bright LEDs.
They simply scaled until their principles became inconvenient, and then they stopped mentioning them. That's Google and "Don't be Evil".
This seems closely related to the problem of model collapse [1][2][3], where LLMs lose the tails of the distribution, and so when you recursively train on the output of an LLM, or otherwise feed the output back into the input in subsequent stages, you lose the precision and diversity that human authors bring to the work. Eventually everything regresses to the mean and anything that would've made the content unique, useful, and differentiated gets lost.
My takeaway from this is that AI is a temporary phenomenon, the end stage of the Internet age. It's going to destroy the Internet as we know it as well as much of the technological knowledge of the developed world, and then we're going to have to start fresh and rebuild everything we know. So I'm trying to use AI to identify and download the remaining sources of facts on the Internet, the human-authored stuff that isn't generated for engagement but comes from the era when people were just putting useful stuff online to share information.
[1] https://en.wikipedia.org/wiki/Model_collapse
[2] https://www.nature.com/articles/s41586-024-07566-y
[3] https://cacm.acm.org/blogcacm/model-collapse-is-already-happ...
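The tail-loss dynamic can be sketched with a toy simulation (purely illustrative, not from the cited papers): "retrain" on a tail-truncated copy of the previous generation's output and watch the spread collapse.

```python
import random
import statistics

random.seed(42)

# Generation 0: "human" data with full tails.
data = [random.gauss(0, 1) for _ in range(10_000)]

def truncate_tails(samples, k=2.0):
    """Keep only samples within k standard deviations of the mean,
    mimicking a model that reproduces typical outputs but drops rare ones."""
    mu = statistics.fmean(samples)
    sd = statistics.stdev(samples)
    return [x for x in samples if abs(x - mu) <= k * sd]

gen = data
for _ in range(5):  # five rounds of training on the previous generation
    gen = truncate_tails(gen)

print(round(statistics.stdev(data), 3), round(statistics.stdev(gen), 3))
```

Each round the distribution narrows; after a few generations the rare, distinctive material is simply gone.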
I know 16-year-olds, and 15-year-olds, and 14-year-olds, who absolutely know what goes on in a job hunt because they are observant, socially aware, and have watched relatives send literally hundreds of resumes and get nothing back.
And those kids ... inexperienced, no mortgage, no creditors, no "real world" responsibilities ... absolutely see it.
When someone builds something using the tools at hand and the experience they have, it definitely matters how old they are and how much they've done. That shapes how you give feedback, both in style and content.
I know a lot of bright, intelligent, keen, motivated kids, and in every way I encourage them to go and build things that they think are relevant and important, even if I don't agree. The experience will shape them and make them better.
You can't register a .ch domain with fewer than 3 characters. It shows as available because the availability checker only looks at whether the name is registered, not whether it's allowed.
For a sec there I thought it said "Roadside Picnic"
You could do worse than spend some time with this fantastic 1972 novel (foreword by Ursula K. Le Guin):
https://content.cosmos.art/media/pages/library/roadside-picn...
Bonus: this version has an afterword by author Boris Strugatsky which is quite entertaining.
If you preallocate and O_DIRECT, haven't you basically soaked up most of the benefit of skipping the filesystem?
It was easier a few years back.
Also https://news.ycombinator.com/item?id=48068333, but got little traction.
What do you mean by opposing sides of midnight? They'd be born on the same day then, if one is born just before midnight of the 2nd and the other (later) just after midnight of the 1st.
I'm suspicious of their results with regards to tool usage.
It's unsurprising that round-tripping long content through an LLM results in corruption. Frequent LLM users already know not to do that.
They claim that tool use didn't help, which surprised me... but they also said:
> To test this, we implemented a basic agentic harness (Yao et al., 2022) with file reading, writing, and code execution tools (Appendix M). We note this is not an optimized state-of-the-art agent system; future work could explore more sophisticated harnesses.
And yeah, their basic harness consists of read_file() and write_file() - that's just round-tripping with an extra step!
The modern coding agent harnesses put a LOT of work into the design of their tools for editing files. My favorite current example of that is the Claude edit suite described here: https://platform.claude.com/docs/en/agents-and-tools/tool-us...
The str_replace and insert commands are essential for avoiding round-trip risky edits of the whole file.
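The idea behind such an edit command can be sketched in a few lines (a hypothetical Python analogue, not Anthropic's actual tool): the old string must match exactly once, so the model applies a targeted patch instead of re-emitting the file.

```python
import tempfile

def str_replace(path, old, new):
    """Replace exactly one occurrence of `old` with `new` in `path`.
    Refusing ambiguous or missing matches is what makes the edit safe:
    the model never round-trips (and possibly corrupts) the whole file."""
    with open(path) as f:
        text = f.read()
    count = text.count(old)
    if count != 1:
        raise ValueError(f"expected exactly one match for {old!r}, found {count}")
    with open(path, "w") as f:
        f.write(text.replace(old, new, 1))

# Demo: fix a one-line bug without touching the rest of the file.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("def add(a, b):\n    return a - b\n")
    path = f.name

str_replace(path, "return a - b", "return a + b")
```

The uniqueness check is the whole trick: a fuzzy or multi-match replace would reintroduce exactly the corruption risk the tool exists to avoid.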
They do at least provide a run_python() tool, so it's possible the better models figured out how to run string replacement using that. I'd like to see their system prompt and if it encouraged Python-based manipulation over reading and then writing the file.
Update: found that harness code here https://github.com/microsoft/delegate52/blob/main/model_agen...
The relevant prompt fragment is:
> You can approach the task in whatever way you find most effective: programmatically or directly by writing files
As with so many papers like this, the results of the paper reflect more on the design of the harness that the paper's authors used than on the models themselves. I'm confident an experienced AI engineer / prompt engineer / pick your preferred title could get better results on this test by iterating on the harness itself.
What's the fallacy called where you oppose something based on the fact that it has impact on something, not realising that the alternative is even worse?
I see people talk about how ugly solar panels make mountainsides, but when I ask "would you prefer a coal factory there instead?" nobody would.
Yes, this is what you'd want. It doesn't have to be as complicated as the HTML5 algorithm either. That's complicated because it was a harmonization of at least 3 browsers' multi-decade heuristics and untold terabytes of existing HTML practice. An algorithm unconcerned with backwards compatibility could be much simpler, yet still clearly define error behavior in a way much easier to use than "scream and die".
And it's still unambiguous. You can cringe at what some people do, but it would be strictly a taste issue rather than a technical one, as the parse would still be unambiguous. And if you think you can fix taste issues with technical specification, well, you've already lost anyhow.
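Python's stdlib parser is a small illustration of the "define error behavior instead of dying" approach: feed it hopelessly malformed markup and you still get one unambiguous parse.

```python
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    """Record every start tag the forgiving parser recovers."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

p = TagCollector()
# Malformed on purpose: nothing is ever closed.
p.feed("<p>unclosed <b>bold <i>nested")
print(p.tags)  # ['p', 'b', 'i']
```

No exception, no ambiguity: the error recovery is part of the contract, which is exactly the property being argued for.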
I think the GP has an issue not with the specification part, but with the part where it's forbidden for clients to render a noncompliant page.
I agree, and those that are still too focused on code generation for specific languages are fighting the last war.
It is the revenge of UML modeling.
Eventually it will get good enough that what comes out of agent work is a matter of formal specification.
Assuming that code is actually needed and cannot be achieved as pure agent orchestration workflows.
Spot on, incredible how OOP hate can mess up a framework.
The only thing Vercel has going for the app-model mess is its partnerships with SaaS vendors, which make it the must-go tooling.
However this will eventually come to an end.
>This could be right for the current architecture of LLMs, but you can come up with specialized large language models that can more efficiently use tokens for a specific subset of problems by encoding the information differently.
That's precisely what happens on the bad side of an S curve.
>Templated though, not manually writing it out for every blog post say.
Both. We wrote HTML manually just fine back in the day.
>and it brings about a global dark age of poverty and inequality by completely eliminating the value of labor vs capital
So, like the past 20 years?
Can't say I hate the HTML 5 spec. It resolves the ambiguities that made previous HTML specs insufficient to make a working web browser.
The standards that make my life miserable at times are the secondary standards like GDPR and WCAG as well as the de facto "standard" systems we are forced to participate in such as Cloudflare, the advertising economy, etc.
It's easy to say "WebUSB is bloat" and I'd certainly say PWA is something that could only come out of the mind that brought us Kubernetes, but lately I've been building biosignals applications and what should my choice be: write fragile GUI applications for the desktop that look like they came out of a lab and crash from memory leaks or spend 1/5 the time to make web applications that look like they belong in the cockpit of a Gundam and "just work"?
Those two claims are independent. Centralized FOSS software cannot do this, since you can audit the source, compile it, and use it that way.
Open source is not a requirement for security, sure, but it's much easier to secure OSS.
And one could argue that, without actually focusing on Linux the kernel and the Linux distros on top of it for the average user, they're just funding server FOSS for use by fat companies.
As a child I saw an acted segment about ball lightning on children's TV, following a person around the house, and had nightmares for a long time afterwards. The thing is spooky as hell.
Yes so far, but it's switching heavily towards Markdown.
You don't attack the bully first; you retaliate.
Because that is usually the extreme haters take.
I wonder how long it takes to back it up.
It's a "solved problem" that didn't ever need solving in the first place.
>But a lot of people disagree with you and think it isn't turning to shit, and in fact for most people on the planet, life gets better every year.
That's so untrue in these here parts, it's laughable
>And do you need a full-on enterprise-grade server?
I might not, but this subthread was about them, wasn't it?
>and then aggressively unfollow/block anything you don't want to see
It keeps showing shit I didn't ask for and don't follow anyway
"Public companies" with incentives shaped that way are a driver of bad outcomes
> Actually renaming it was too long and complicated a process,
Specifically, actually renaming it requires an Act of Congress, since it is specified in law.
I am building a better interface for managing KNX systems than the ETS6 software. Code is here: https://github.com/jgrahamc/koolenex
1. I would not have attempted this without AI assistance because it's a big project.
2. I have built a functional program that I am able to use for real work in a handful of weeks, working part time on this (like literally a few hours per day prompting Claude and Kimi).
3. Had I decided to do this without AI assistance it would have been months of work.
It is enabled by default on Android, and only developers can change it temporarily via an ADB session.
I have a question that's been going through my mind -
Why is age verification connected with identity verification?
I understand why the former is not possible without the latter, but my question is -
Whichever entity is responsible for the verification can just pass on the age verification confirmation without passing through any of the other details, right?
Am I mistaken here? Because if this was possible, I could still go ahead with using the VPN.
(2024) really. Saw this years ago.
Xerography was once a very touchy process. It took insane complexity and skilled technicians to make xerography go in the selenium drum era. The basic idea of xerography is that you charge up a photosensitive surface which is discharged by light. So the original is projected onto a selenium drum, and the light areas are discharged. The remaining charged areas will attract toner, which is then rolled onto paper. The final step is heat-fusing the toner to the paper. Then the drum is cleaned and recharged for the next page.
It's amazing that this works at all. It barely did in the selenium drum era, which is what this article is about. Selenium is very soft, and not a very good photoconductor. Around 1990, selenium was replaced with multi-layer organic photoconductors, and the process became much more robust. That resulted in smaller, cheaper, and less maintenance-intensive xerographic copiers and printers. Now, anybody can change the toner cartridge.
That's why Xerox needed so much technician know-how. The selenium drum machines needed a lot of tweaking. Sort of like 1950s cars or TVs, which needed lots of screwdriver adjustments. As the technology matured, it became idiot-resistant, or at least field-replaceable without adjustments.
This is how technology progresses, from fussy to robust. At the fussy stage, you need people who really understand how it works and how it breaks. At the robust stage, you have minimally trained part changers. They're viewed as under-qualified by the old guard, and as cheaper by management. Watch for this pattern as new technologies are adopted. It's happening to programming right now.
I don't understand the distinction you are making.
Obviously they are based on current knowledge. Nobody has any actual crystal ball.
But the outcomes are with regard to future events. So the correct term is predictions.
And they don't "just summarize the current knowledge". The whole point is that they better reflect the knowledge of people who presumably know better because they are willing to put their money where their mouth is, and ignore the vast majority of nonsense. That's not summarization. That's judgment. That's the whole point.
Still a bit early but I'm working on kiwi, a k-dialect that can lower to Apple MLX.
Currently supports CPU and GPU on macOS and CPU on linux.
https://github.com/kiwi-array-lang/kiwi
Kiwi runs computations on small dense arrays in its own runtime; when they are larger, it lowers to MLX CPU, and eventually to MLX GPU when it's worth it.
As a user you don't have to change any code; you just write k.
I'm sure there are other languages designed to take advantage of modern GPUs.
But even with SIMD you can get quite far with array oriented code and many array language implementations will make use of it (BQN, ngn/growler/k, goal, ktye k has a version with SIMD support, …)
> Siri fell behind due to how good Apple’s privacy is.
That makes zero sense.
The problem with Siri is... Siri. The interface itself.
Zero of my complaints around Siri have to do with it not being able to access my private data.
They're entirely about it not understanding my request in the first place or lacking a basic capability.
Corrected: a certain type of very loud and very online person in your audience hates AI art and thinks less of you for using it.
But that doesn't matter, because the game theory they outlined is directionally right. The cohort of people who hate AI art is relatively small. But the cohort of people who love it is even smaller. People can generally spot it, and most people are indifferent to it.
Having said that: I think it's also true that people are generally indifferent to any of the "casual" art in online writing and publications. It's overused and a crutch.
A hero image at the top of a post: good, can be great, do it, make sure it's not AI. But like, a random dinosaur giving a thumbs up in the middle of the post? Don't do that at all.
Most of the other regions are fairly stable. Ohio (us-east-2) is a great choice if you're just starting out. Not sure about ca-central-1, but I've never heard anything bad about it.
I have a lot of experience in this area (and some patent applications). For Alexa, the device established a connection back to the server and then kept that open, sending basically HTTP2/SPDY/something like it over the wire after it detected the wake word. This allowed the STT to start processing before you finished talking, so there is only a small delay in processing the last few chunks of your utterance.
The answer came back over the same connection.
In the case of OpenAI, they can't exactly keep a persistent connection open like Alexa does, but they can use HTTP2 from the phone and both iOS and Android will pretty much take care of that connection magically.
The author is absolutely right, a real-time protocol isn't necessary. It's more important to get all the data. The user won't even notice a delay until you get over 500ms, especially in the age of mobile phones, where most people are used to their real-time human-to-human communications having a delay.
(If you work at OpenAI or Anthropic, give me a shout, I'm happy to get into more details with you)
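A back-of-the-envelope model of why the persistent connection keeps perceived latency small (all numbers assumed, purely illustrative):

```python
# Toy latency model: with a persistent connection, STT runs while the
# user is still talking, so only the final chunk is left at the end.
SPEECH_MS = 2000        # length of the utterance (assumed)
CHUNK_MS = 200          # audio chunk size sent over the wire (assumed)
STT_MS_PER_CHUNK = 50   # STT processing time per chunk (assumed; < CHUNK_MS)

n_chunks = SPEECH_MS // CHUNK_MS

# Batch: upload everything after the utterance ends, then process it all.
batch_latency_ms = n_chunks * STT_MS_PER_CHUNK

# Streaming: chunks were processed as they arrived; only the last remains.
streaming_latency_ms = STT_MS_PER_CHUNK

print(batch_latency_ms, streaming_latency_ms)  # 500 50
```

As long as per-chunk processing is faster than real time, the perceived delay collapses to roughly one chunk's worth of work, well under the 500ms noticeability threshold.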
Hubris, an embedded RTOS-like used in production by Oxide, has ~4% unsafe code in the kernel last I checked. There’s a ring buffer implementation that has one unsafe, for unchecked indexing: https://github.com/oxidecomputer/hubris/blob/master/lib/ring... (this of course does not mean that it is the one ring buffer to rule them all, but it’s to demonstrate that yes, it is at least possible to have one with minimum unsafe.)
It’s always a way lower number than folks assume. Even in spaces that have higher than average usage.
> only 1 transmitter at a time per channel - across all WLANs, yours and your neighbours, with no deterministic way to avoid collisions.
That’s not correct. You and your neighbor can use the same channel at the same time. On your network, the transmissions of the other network appear will appear as noise. As long as the other devices are far enough away, however, your devices will still be able to make out their own signal.
If the masses can somehow point the absolute loose-cannon that is the current President at Google, things might actually change.
This feels like weird framing. They still need energy to produce it.
I have a genetically engineered luminescent petunia plant. It’s neat, but a ways off from being useful for anything.
I attended Caltech in the 70s when it had an honor system. An anecdote on how it worked:
A fellow student of mine, "Bob", was taking Ama95, a required class that was one of the hardest classes. All exams were take home, open book, open note, but with a time limit of 2 hours. There was no proctoring, and nobody would know if one took extra time or not.
Bob took the exam to his dorm room, closed the door, and set the timer at 2 hours. He had been up late studying, and fell asleep. The timer woke him. He figured he'd been asleep for an hour. So he drew a line in his blue book, and continued taking the test for another hour. He then wrote an explanation of the line and what had happened, and turned it in.
He received an F. The professor was very apologetic, but explained that he had no choice.
Bob received the news with equanimity, and signed up to take the class again next year. He related this story in a matter of fact manner to a group of us in the dorm library.
The thing about the honor system is it turned the students and professors into collaborators rather than adversaries. The students liked the honor system very much. If their best friend cheated, they'd turn him in. Hence, any attempt at organized cheating meant ostracism. I never saw any of that in my time there.
Nobody stole anything in the dorm that I was aware of.
For contrast, I attended a class at a local college. One of the other students befriended me, and it turned out he did that to convince me to help him cheat. (I declined.) A friend of mine attended another university, and the day he moved into his freshman dorm room it was looted.
Why is this any better? It doesn't solve any of the identity and end-to-end encryption problems centralized messengers do; it just changes the underlying connectivity model, which is the least interesting part of the system.
What? Don't Cloudflare literally have their own CAPTCHA service? Why are they using reCAPTCHA?
D doesn't allow pointer arithmetic in @safe code. At first it seems like that cannot work, but it works very well. Pointer arithmetic is relegated to functions that are @system.
The reason it works is because D has actual array types.
If you choose to use automatic memory management with D, you are memory safe.
Yu-Gi-Oh cards are still a thing? That dates from 30 years ago.
I just looked at Cabbage Patch dolls on eBay. The bottom has finally fallen out of that market. Used to see asking prices over $1000. Now they're all around $25.
> In a functioning system the U.S. Supreme Court would step in and check the power of all legislatures to gerrymander
Based on what authority, and according to what standards? In Rucho v. Common Cause, the Supreme Court's holding was based on the premise that it lacked legal standards it could use to judge whether a map was gerrymandered or not. Researchers in that case proposed various mathematical approaches for creating compact districts, but the Court found that there wasn't an approach that would distinguish permissible from impermissible gerrymanders.
Subsequent research largely bore out that premise. https://gking.harvard.edu/compact/ ("The US Supreme Court, many state constitutions, and numerous judicial opinions require that legislative districts be 'compact,' a concept assumed so simple that the only definition given in the law is 'you know it when you see it.' Academics, in contrast, have concluded that the concept is so complex that it has multiple theoretical dimensions requiring large numbers of conflicting empirical measures.").
Legislatures today can use software that creates biased maps while meeting compactness criteria: https://journals.library.columbia.edu/index.php/stlr/blog/vi.... How do courts strike down maps as gerrymandered when you can use software to generate a variety of maps with very different partisan leans that all measure reasonably compact mathematically?
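For a concrete example of one such (contested) measure, the widely used Polsby-Popper score compares a district's area against a circle of the same perimeter:

```python
import math

def polsby_popper(area, perimeter):
    """Compactness score 4*pi*A / P^2: 1.0 for a circle, near 0 for sprawl."""
    return 4 * math.pi * area / perimeter ** 2

circle = polsby_popper(math.pi * 1.0**2, 2 * math.pi * 1.0)  # unit circle
ribbon = polsby_popper(10 * 0.1, 2 * (10 + 0.1))             # 10 x 0.1 strip
print(round(circle, 2), round(ribbon, 3))
```

The catch described above is that a map can score respectably on this metric while still being drawn for partisan effect, which is why the measure alone can't distinguish permissible from impermissible maps.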
> Court has instead chosen to fan the flames by reducing barriers to gerrymandering. (whether racial or political party based)
Your characterization of Louisiana v. Callais is backward. That case struck down a racially gerrymandered map. In Louisiana v. Callais, the legislature originally drafted a pretty straightforward map: https://commons.wikimedia.org/wiki/File:2025_Louisiana_congr.... The district court then ruled that the map had to be more gerrymandered, to create a second majority-minority voting district: https://commons.wikimedia.org/wiki/File:2025_Louisiana_congr.... If you judge the maps according to mathematical compactness criterion, the additional majority-minority district in the second map totally flunks that test.
That is an area where the Supreme Court does have a concrete standard by which to judge whether maps are racially gerrymandered. Was race explicitly used to create the map? Then it's an unlawful gerrymander.
Reads kind of sales-pitchy. Every day we see another actively exploited Linux LPE; have you thought about your SBOM today?
We have a huge problem.
The US is at war. Much of the world is at war at the cyber attack level right now. The US, the EU, most of the Middle East, Israel, Russia... Major services have been attacked and have gone down for days at a time - Ubuntu, Github, Let's Encrypt, Stryker. Entire hospital systems have had to partially shut down.
Now, in the middle of this, AI has made attacks much faster to generate. Faster than the defensive side can respond. Zero-day attacks used to be rare. Now they're normal.
It's going to get worse before it gets better. Maybe much worse.
Human coders have the same problem too - oftentimes the most important question that future maintainers have of the code is "Why was this decision made?", but that's not captured anywhere in the code itself.
The right place for this is usually in the design doc or commit message, and robust engineering organizations will ensure that commits are cross-referenced back to design and requirements docs so you can trace decisions from git blame back to the actual rationale.
The same process also works pretty well with LLMs. Google, for example, is internally championing a process where the engineer has a dialog with the LLM to generate a design doc, oftentimes with an adversarial LLM to poke holes in the design. Once the design is fully specified, the last step is to ask the LLM to turn the design doc into code. This creates a human-readable artifact that traces the decisions that the human and AI collaboratively made, which then can be traced back from the code.
I adore roadside attractions. I have two key sources for finding them:
https://www.atlasobscura.com has a very high bar for inclusion. I fire it up anywhere I visit and see if there's something obscure and interesting to check out.
https://www.roadsideamerica.com has a very low bar - like a rock that someone painted pink and added googly eyes to and now it looks a bit like a pig. Any time I'm on a road trip I keep an eye on this (I use their inexpensive iPhone app) to see if there's anything worth a quick diversion.
This has been a very long time coming, and the crackup we're starting to see was predicted long before anyone knew what an LLM was.
The catalyst is the shift towards software transparency: both the radically increased adoption of open source and source-available software, and the radically improved capabilities of reversing and decompilation tools. It has been over a decade since any ordinary off-the-shelf closed-source software was meaningfully obscured from serious adversaries.
This has been playing out in slow motion ever since BinDiff: you can't patch software without disclosing vulnerabilities. We've been operating in a state of denial about this, because there was some domain expertise involved in becoming a practitioner for whom patches were transparently vulnerability disclosures. But AIs have vaporized the pretense.
It is now the case that any time something gets merged into mainline Linux, several different organizations are feeding the diffs through LLM prompts aggressively evaluating whether they fix a vulnerability and generating exploit guidance. That will be the case for most major open source projects (nginx, OpenSSL, Postgres, &c) sooner rather than later.
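The pipeline being described is not exotic. A hedged sketch of the shape of it, where `query_llm` is a stand-in for whatever model endpoint an organization actually uses (the prompt wording is invented for illustration):

```python
# Each diff merged into a watched project is wrapped in a triage prompt asking
# whether it silently fixes a vulnerability. `query_llm` is a hypothetical
# placeholder, not a real API.

TRIAGE_PROMPT = """You are a vulnerability analyst.
Does the following diff fix a security vulnerability? If so, classify it
and describe how the pre-patch code could be exploited.

{diff}
"""

def build_triage_prompt(diff):
    """Embed a raw diff in the triage prompt."""
    return TRIAGE_PROMPT.format(diff=diff)

def triage(diff, query_llm):
    """Run one diff through the model; query_llm maps prompt text to a reply."""
    return query_llm(build_triage_prompt(diff))
```

Run against every commit on a mailing list or mainline branch, this turns each public patch into a candidate exploit lead at near-zero marginal cost.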
The norms of coordinated disclosure are not calibrated for this environment. They really haven't been for the last decade.
I'm weirdly comfortable with this, because I think coordinated disclosure norms have always been blinkered, based on the unquestioned premise that delaying disclosure for the operational convenience of system administrators is a good thing. There are reasons to question that premise! The delay also keeps information out of the hands of system operators who have options other than applying patches.
Were the chances that an npm package is crap factored in?
Josh Aas is on the thread. It's a compliance issue, they expect to be issuing shortly.
Cute detail: if you switch to another tab and then back again it shows a banner at the top:
> You left for 6.3 seconds. We noticed.
Have you tried the "use red/green TDD" trick?
I believe that increases the chances of one-shot code working. It's also possible that was true against Opus 4.5 and isn't necessary against Opus 4.7, but I haven't spotted the difference yet.
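For anyone unfamiliar with the trick, red/green TDD in miniature looks like this (the `slugify` function and its spec are invented for illustration):

```python
# "Red" step: the test is written first, before the implementation exists,
# and is watched to fail. "Green" step: add the smallest code that passes.

def test_slugify():
    # Written before slugify existed; running it then was the "red" step.
    assert slugify("  Hello World ") == "hello-world"

def slugify(title: str) -> str:
    # "Green" step: just enough code to make test_slugify pass.
    return title.strip().lower().replace(" ", "-")

test_slugify()  # passes now -- green
```

With an LLM, the point is the same as with humans: a failing test pins down the intended behavior before any code is generated, so the model has something concrete to satisfy.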
The kernel is a Linux kernel. The userspace is very different from a typical Linux distribution.
Hilarious, but...
CoS is famous for working really hard to charge people who do any kind of vandalism or make trouble for them with hate crimes. So what might be a minor crime if you did it against an ordinary organization might get you in a lot more trouble against the CoS.
40 minutes of video.
This lock needs a Lock Picking Lawyer attack. He'd be done in two minutes.
The trouble with this lock is that the removable key contacts the pins. Even though the key is isolated from the outside while it's in contact with the pins, you do get it back out after contact. So there's potential for impressioning.
A design where there's a level of indirection between the key and the sensing device would be better. Key goes in, and is read and the info stored. Key rotates further, and stored info is tested while the info storage mechanism is isolated from both the outside and the key.
Some locks like that have been built. I saw one with a column of steel balls for each pin. The key raises the columns of balls, depending on the bitting. The number of balls that are raised above the shear line then varies for each cylinder. That's the information storage device. As the key is rotated, the raised balls become isolated from the keyway. Then, protected from outside access, the columns of balls act as the key for an ordinary pin tumbler setup.
> So many security fixes are coming out now that examining commits is much more attractive: the signal-to-noise ratio is higher
Why?
> Additionally, having AI evaluate each commit as it passes is increasingly cheap and effective
This is the key. With AI, the “people won't notice, with so many changes going past” assumption fails.
Navin, don't leave a comment 'selling' the content. It's a good way to get people to assume it's a spam submission. Best to delete this and re-submit.
You laugh, but it's a real problem. In this case: a WP claim that Stoll was dead would quickly fall to the lack of reliable sources indicating his death; naturally, there's a WP policy for this.
> Writing 4 consecutive pixels at 0x0 stores those in video memory 64KB apart.
I don't think "64KB apart" would make much sense either, especially because of the flexibility of the VGA memory controller described in the article; they end up in 4 separate 64KB planes. Unless you're referring to the linear view of the framebuffer that post-VGA GPUs use, in which case the mapping between the planes and LFB can differ considerably between implementations: https://stackoverflow.com/questions/36269239/meaning-of-byte...
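For concreteness, here's a toy model of unchained ("Mode X"-style) planar addressing: the 80-byte stride and the back-to-back stacking of planes in the "linear view" are illustrative assumptions, and as the Stack Overflow thread notes, real LFB mappings vary between implementations.

```python
# Toy model of unchained 256-color VGA addressing: a pixel's plane is x mod 4,
# and its byte offset within that plane is y*stride + x//4. Four consecutive
# pixels share one offset across four separate 64 KB planes -- they are only
# "64KB apart" if you imagine the planes stacked linearly.

PLANE_SIZE = 64 * 1024  # 64 KB per plane

def modex_address(x, y, stride=80):
    """Return (plane, offset-within-plane) for pixel (x, y)."""
    return x % 4, y * stride + x // 4

def linear_view_address(x, y, stride=80):
    """Hypothetical linear layout with the four planes stacked back to back."""
    plane, offset = modex_address(x, y, stride)
    return plane * PLANE_SIZE + offset

# Pixels (0,0)..(3,0) land at offset 0 in planes 0..3.
print([modex_address(x, 0) for x in range(4)])  # -> [(0, 0), (1, 0), (2, 0), (3, 0)]
```

In the stacked view, `linear_view_address` does put consecutive pixels 64 KB apart, which is presumably where the quoted claim comes from; within the actual planar hardware model they simply sit at the same offset in different planes.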