What are the most upvoted users of Hacker News commenting on? Powered by the /leaders top 50 and updated every thirty minutes. Made by @jamespotterdev.
Do they not have mice and rats there? This looks like a place those creatures would nest long before a bird got to it.
The author highlights an interesting point. There are several variables in play:
A- The difficulty of publishing the tool
B- The difficulty of creating the tool
C- The usefulness of the tool to others
D- The social reward for publishing the tool
E- The negative incentive of adding a dependency
The difficulty of finding a canned solution goes up with A (someone needs to figure out how to publish it) and B (someone needs to create it), but the more useful it is to the community (C), the easier it gets to find, because people will tell you about it.
If A and B are substantially different, say A is much higher than B, people will tend to write their own tool and forget about it, so there will be fewer published solutions to your problem. If A and B are both low, and the social reward for publishing (D) is higher than the price of depending on something else (E), you'll have a leftpad situation. A lot of NPM is made of packages with high C and D and low E.
In the case of Emacs Lisp, A used to be high but is now low, B (once you climb the learning curve) is low, and C, D, and E aren't high either. This can lead to a scenario where you build the tool before you even look for an existing one that does the job (unlike with VSCode, and Eclipse before it; both have a high B).
I see a thesis here that someone younger than me will want to bring into this world.
For years my favorite hackathon kit has been a tablet + cheap bluetooth mouse + cheap bluetooth keyboard. It could be an iPad or an Amazon Fire tablet so long as it can run an RDP client and I can log into my home computer or a big cloud machine.
Also, if the game is single-player, you don't care: Simply let the players enjoy the game how they want to enjoy it.
A do-nothing C program (int main() { return 0; }):
$ time ./a.out
real 0m0.002s
user 0m0.000s
sys 0m0.002s
A do-nothing Go program:
$ time ./tmp
real 0m0.002s
user 0m0.000s
sys 0m0.003s
I don't believe Go has any optimizations to skip starting its runtime when it isn't necessary, but when I added spawning a goroutine that immediately blocks on a channel read that will never come, the numbers didn't change. That doesn't really time the runtime; the program probably terminated before the goroutine was scheduled to run anything. It just ensures there definitely wasn't an early exit because the compiler or the runtime "realized" it didn't need to start the runtime.

I'm sure the Go program is somewhat slower to start and end than C, and that we're running into the limits of how quickly processes can be spawned, plus other timing overhead, which obscures the difference. But "it starts up in less than the overhead for starting a process in the shell" is, for most practical purposes, the same speed.
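To see how much of those numbers is just process-spawn overhead, you can time a trivial binary in a loop. A rough sketch (my addition, assuming a Unix-like system with /bin/true):

import subprocess
import time

# Average the cost of spawning a process that does nothing, to estimate
# the floor that any "time ./a.out" measurement sits on.
N = 200
start = time.perf_counter()
for _ in range(N):
    subprocess.run(["/bin/true"], check=True)
elapsed = time.perf_counter() - start
print(f"{elapsed / N * 1000:.3f} ms per spawn")

If that floor is on the order of a millisecond or two, a compiled do-nothing program is unmeasurable against it, which is the point above.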
Not even a "do-nothing" Python program, no Python program at all:
$ time python3 -c 1
real 0m0.012s
user 0m0.008s
sys 0m0.004s
If you had a Go program that was slow to start up, it was your program, not Go. By contrast, Python, and the dynamic scripting languages in general, can be quite slow to start up, just in the reading and compiling of the code. (Even .pyc files, IIRC, take processing, just less processing than Python source code... it's still nowhere near "memory map it in and go" as it is for statically-compiled languages.)
Option 3: Elon takes over the Federal government, causes some major security incidents, and cuts off USAID, stranding a number of Federal employees and cutting off short-term food support for hundreds of thousands of people who depend on it.
Option 4: Elon takes over a social network and tries to Orbanize the West with it.
But they're confidently asserting a whole bunch of specific made-up reasons this is shittier than a real Monet.
It’s like the sommeliers who can’t detect red vs. white wine when blindfolded.
> where we identify public servants with strong technical aptitude across government, bring them into dedicated product teams
> The team’s approach was straightforward. Build working software fast. Put it in front of real users early. Collect feedback. Fix things quickly. Release updates every two weeks.
> That’s a 95% cost reduction. Both systems instead of one. Delivered faster. With 643 users already on the platform
This is a proven solution. These parts, the non-AI management ones, are proven to work in all sorts of places. Gov.uk is another example.
However, there's one massive problem with this: it doesn't involve the free market, and it doesn't make any money for corporations to feed back to politicians in campaign-donation kickbacks. It even involves respecting civil servants - maybe even paying them market wages! These parts are so heretical that most governments would choose the solution that is 10X more expensive and also doesn't work, every single time.
The secret ingredient is money. Money makes everything possible. Money can materialize energy and water from nowhere.
(well, it can't, but it allows you to buy them off poor people, who don't matter)
ty, I accidentally submitted the URL with an anchor
I think mature sysadmins accept there's a certain... bushido to their security-critical role. It is, after all, their job to respond to security threats, including by revoking credentials, and to recognize that they might fall on the wrong side of that some day.
But things are different both in small companies, and non-US environments where minimum notice periods or redundancy consultations are a thing. You may put people on "gardening leave" where they're still paid but not actually working. Or it may be the case that the sysadmin is the one person who knows and controls a lot of stuff, and the employer has ended up relying on them for a smooth handover. Password and role management for the "root" of things is a real problem.
Coding on 8 and 16 bit home computers still required some skills that most vibe coders certainly lack.
No, it's just a cash replacement, the trust lies in expecting people to make the payment in the first place rather than just steal unattended goods.
It takes up more space and costs more (connectors are surprisingly expensive), as well as adding an electrical overhead, while most (yes, not all) customers don't take advantage of it.
Meanwhile in Korea:
https://www.tomshardware.com/tech-industry/south-korean-offi...
https://www.tomshardware.com/tech-industry/sk-hynix-employee...
SK Hynix is making an absurd amount of money from the RAM shortage, and the employees are not unreasonably demanding their cut from it.
It's the same, on steroids.
No article with that title. Flagged.
"AI safety", as defined here, has most of the problem that "fact checking" for social media had. Many of the same problems the "woke" concern about "microagressions" had. Most of the techniques used in advertising. Much of what passes for political discourse today has the same problems. It's somewhat convincing bullshit.
Should AIs be held to a higher standard than X/Twitter? Than Reddit? Than Fox News? What censorship is appropriate? And, yes, alignment is censorship.
Then there's the big problem of chatbots telling you what you seem to want to hear. This is an old problem. "Happy Talk", from "South Pacific", is the entertainment version. "Wartime", by Paul Fussell, is the serious version.
As the article points out, a small percentage of the population is very vulnerable to certain types of misinformation. It may be the same fraction of the population that's vulnerable to cults. But maybe not. Cults have a group self-reinforcing mechanism and an agenda. Chatbots have neither. Worth studying.
The point here is that restrictions on chatbots strong enough to protect the vulnerable would close off most political and social discourse.
Do you think there was ethnic favoritism going on?
It has a direct impact on the number of emails and Slack messages I have to reply to.
Yes, Rust should not be used for everything, just like any other language shouldn't, but some devs only see nails.
For example, Amazon uses Rust mostly for actual systems programming: writing hypervisors and low-level cloud infrastructure services.
They have plenty of other stuff in there as well.
This is why you owe nothing to your employer: record revenue comes with management bonuses, and layoffs for those who helped get there.
Those extra hours? Only if the team really needs them.
Naturally this tends to be something only seniors see, thus ageism.
Bell System historical video on this.[1] This is the popular version, but it's not bad. The more technical version [2].
There's a reason we're not reading monospaced text here.
You underestimate the number of HN users who are reading this site in their terminal. ;-)
Lots of products have the same fraud/chargeback dynamics and are similarly disfavored by payment processors.
> Have you happened to purchase anything in the past 12 months, and looked at the Fed's inflation numbers?
The Fed doesn't issue inflation numbers. The usually cited headline inflation numbers (CPI) come from the Department of Labor's Bureau of Labor Statistics; the ones the Fed uses as an input to monetary policy decisions (PCE) are issued by the Department of Commerce's Bureau of Economic Analysis.
>Nearly 40% of Stanford undergraduates claim they’re disabled. I’m one of them
https://www.thetimes.com/us/news-today/article/40-percent-st...
I’ll note that “spectacular” regularly describes shitty things.
Like 9/11, by a national news outlet: https://www.washingtonpost.com/nation/2021/09/10/september-1...
"There is no independent audit, no time series, no disclosed methodology, so we have no idea whether the real figure is higher, whether it is growing, or how it compares across the other frontier models, none of which publish equivalent data."
Tip for writers: aggressively filter the "no X, no Y, no Z" pattern out of your writing. Whether or not you used AI to help you write, it's such a red flag now that you should be actively avoiding it in anything you publish.
"Cheating" was pointless, because everyone else in the room was struggling just as hard as you were.
That reminds me of what an instructor (one of the best ones I've had) said a long time ago in response to one of my classmates asking if the exam could be open-book: "I could make it so, but it's not going to get any easier." The same instructor also responded to another question with "it doesn't mean I won't change the length of the exam."
Being a high trust society isn’t the same thing as being a fully egalitarian society.
Getting to “high trust for the majority” is the 0 to 1 of civilizational development. Most societies never get there—they’re low trust for everyone.
This is an article about nerds writing nerd software.
And the doc is also spending a shocking amount of their time on the phone yelling at the insurance company flacks, as a bonus.
Show me the more recent NHE table where this effect shows up and I'll be ready to have the conversation, but right now this seems like a dodge. Whatever effect you're describing, if it's material, has to have started after the NHE data I just posted, from 2023. I don't remember thinking that the health system in 2022 was good.
Fun thing about the NHE: you can project it as far back as you want. The data is there.
Participants were 529 (289 men, 234 women, and 6 identified as other) undergraduate business students with a mean age of 18.14 years (SD = 1.19, range 16 to 37).
Sigh. A sample of convenience. Psychology remains the study of undergraduates.
If they wanted real answers, they'd go to bike events.
This is not obvious at all.
Loyalty is a fundamental moral principle. Loyalty to a friend carries a lot of moral weight. Humans are social animals, and loyalty to a friend can easily outweigh loyalty to some abstract institution. Like, my friend will still have my back five years from now. The university I went to won't do shit for me.
Like, if you're talking about loyalty to a friend who wants you to cover up an unjustified murder they committed, then I think most people will say the value of telling the cops about the murder outweighs the loyalty to your friend.
But for cheating on some test where probably 30% of the other students are cheating anyways? I think the vast majority of people will say that loyalty to your friend is the more important moral principle here. We all make mistakes in life, and the whole idea of loyalty and love to a friend is that we support them even though they make mistakes. As long as the mistakes are common mistakes like cheating on a test or cheating on a boyfriend, as opposed to things like felony crimes.
When you lose access to your projects, does Anthropic acquire the intellectual property? It's a real issue when it's in a machine learning system, not passive storage like GitHub.
>As far as I know, java has 7 GC implementations, none of which are perfect, all of which have drawbacks
Compared to Python's, all of them are beyond perfect. And 99.9% of the time you don't even need to use anything but the default.
The startup I'm at (ersc.io) is working in this space (version control more than the IDE side of things), because, in my opinion, there just plain isn't any.
>responding to incoming emails / voicetexts.
You need an AI for that?
The idea that America had “goodwill” in other countries before Trump is laughable. Where? Latin America? Africa? In the Muslim world? We bombed the hell out of all those places long before Trump. This most recent Iran war has generated less outrage in the Muslim world than the war against Iraq 20 years ago.
American foreign policy since the 1950s, fixated on fighting communism and then terrorism, has meddled with so many foreign countries that it’s silly to talk about “goodwill” towards America. That is not to say goodwill matters. Clearly the U.S. has done great without it.
Manliness is the confounding factor.
> Muneeb and Sohaib Akhter, now both 34, had been in trouble before. Back in 2015, the brothers pled guilty in Virginia to a scheme involving wire fraud and computers. Muneeb was sentenced to three years in prison, while Sohaib got two.
After their stints in jail, the brothers worked their way back into the tech world. In 2023, Muneeb got a job with a Washington, DC, firm that sold software and services to 45 federal clients; Sohaib got a job at the same company a year later.
What in the actual fuck. I'm all for giving people second chances. But maybe some ringfencing?
That's what happens when the storage tanks fill up. Nobody can buy it because they have to accept delivery and put it somewhere.
Yes.
"Medicare Advantage" = HMO. All the usual HMO problems.
The best Medigap plan is Plan F, which is no longer available to new subscribers. "Discontinuation of Medicare Plan F was a strategic decision aimed at promoting responsible healthcare spending and ensuring the financial sustainability of the Medicare program." It covers just about everything Medicare doesn't pay, including the various deductibles Medicare has. Once Medicare covers its part, the Plan F provider has to pay the rest. They don't get to question it. I don't even see hospital bills, just statements that it's been paid for.
Plan G is one step down from that.
It might be more likely that it cannibalizes used Macbook Air sales.
Combined with the increasing acceptance of shoplifting [1] and unprecedented corruption and criminality among our national leaders, it's hard not to read this as a moral page turning on American culture.
[1] https://www.theatlantic.com/ideas/2026/04/hasan-piker-jia-to...
Oh, they really don't dogfood Windows development any longer, regardless of the incentives.
I have my WinRT 8, UAP 8.1, UWP 10, Project Reunion, .NET Native, C++/CX, C++/WinRT, XAML Islands, XAML Direct, WinUI 2.0, WinUI 3.0, WinAppSDK and whatnot scars to prove how they aren't dogfooding any piece of it in any meaningful manner.
Heck, they keep talking about C++ support in WinUI 3, as if the team hadn't left the project and moved on to playing with Rust instead.
They've managed to turn plenty of early WinRT advocates into their hardest critics, who no longer believe anything else they put out, like now this Windows K2 project.
We weren't talking about whether the registry was better or worse, we were talking about how similar the two OSes were.
I run Steam on Ubuntu with a "GeForce RTX 2070 SUPER" (according to lspci), and while it generally works, it has some weird issues with gaming on Linux. Some games end up with what feels like ~200ms latency for no apparent reason, and some things like Just Cause 3 (a 2015 game I ought to be horribly overspec'd for) run comfortably but just barely, which really isn't right. And Persona 5 gets about 2 frames per second in Linux. My Steam Deck pushes it at 60 at 720p with no problem, and I think was pushing 1080 at one point quite playably, and I think I benchmarked my PC at ~6 times more powerful than my Steam Deck.
Whereas the AMD-based Steam Deck always does what it should do.
In fact he's the opposite of frightening.
It's a win, but the Salt Lake Tribune is mostly Utah news.
Who doesn't have a paywall now? Fox News. This is a problem.
There's been real progress. Wine's memory allocator had an architecture with three nested locks. "Realloc" held a futex lock on the memory allocator while recopying the buffer. Multiple threads doing allocation could go into futex congestion, with many threads looping on the futex. This made Vec::push in Rust insanely inefficient. Some of my programs dropped from 60FPS to about 0.5 FPS.
Fixed in Wine 11.0. Thanks to the Wine team.
Not sure if this was related to NTSYNC, but Wine's locking infrastructure definitely got an overhaul.
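Not Wine's actual code, just a toy sketch of the pattern (all names made up): the shape of the fix is to stop holding the allocator lock across the whole recopy.

import threading

heap_lock = threading.Lock()

def realloc_bad(buf: bytearray, extra: int) -> bytearray:
    # Anti-pattern: the entire copy happens while holding the heap lock,
    # so every other thread's allocation stalls behind the memcpy.
    with heap_lock:
        new = bytearray(len(buf) + extra)
        new[:len(buf)] = buf
        return new

def realloc_better(buf: bytearray, extra: int) -> bytearray:
    # Hold the lock only for the allocator bookkeeping; the caller owns
    # buf, so the copy itself doesn't need to block other threads.
    with heap_lock:
        new = bytearray(len(buf) + extra)
    new[:len(buf)] = buf
    return new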
That's probably why they are two of the most powerful men currently in existence.
When there is something genuinely acknowledged as being valuable - and a $900B company certainly qualifies - people are going to fight over it. Only natural, because in most cases the way to get power is to fight for things that will make you powerful. Just look at the history of Facebook or Twitter or Google Chauffeur/Waymo or Cisco or the U.S. presidency.
When you get wealth and power without fighting, it's usually because you managed to identify something that would eventually make you powerful without anyone else realizing it's important, until you become too big to overthrow. This is the story of the Google founders, or eBay, or GitHub, or... I can't really think of others; it's a pretty rare path to success. Either that, or you seem non-threatening and mild-mannered enough that nobody attacks you, and then you're the last one standing after all the combative types have destroyed each other, like how Sundar got to be CEO of Alphabet or Bran Stark won the Seven Kingdoms.
... and you reward them for it?
Ouch! We get told at least 10x what a genius this guy is before we're even told what he did.
Yes, and are in the position of maintaining their own forks if needed, or doing reviews when updating them.
Is everyone else?
Many sites are talking about this, but here's the actual video.
Related announcement from CEO on X: https://x.com/adcock_brett
It's a humanoid robot taking bags and boxes from an incoming chute and facing them label-side down. About an hour and a half in so far. No commentary.
>a man who I praised fulsomely in a blog post 20 years ago, for his coverage of the genocide in Darfur
But apparently don't care for him when the genocide hits closer to home.
A national public payment processor in the US would not be more immune to political pressure from religious moralist groups than a duopoly of private processors.
For evidence see, well, all the other institutions of the US federal government.
> Muneeb Akhter asked Sohaib Akhter for the plaintext password of an individual who submitted a complaint to the Equal Employment Opportunity Commission’s Public Portal, which was maintained by the Akhters’ employer. Sohaib Akhter conducted a database query on the EEOC database and then provided the password to Muneeb Akhter.
WTF?
Was there 2009-2014 and then again 2020-2026. I think there are a lot of aspects of IDE use and culture at Google that this post omits.
My recollection from 2009-2011 is that emacs and vim were the dominant editors (just as the TV show Silicon Valley depicted), and there was a decent-sized minority using Eclipse and Intellij, both of which had official support for Google tooling. The command line still largely ruled though, even though the official Google developer workstation was Goobuntu, Google-flavored Ubuntu. This reflected the overall developer population of the time.
I think Cider actually was invented a little earlier than the article describes. I have vague memories of some engineers experimenting with web-based IDEs that would integrate directly with Critique (the code-review software) as early as 2013-2014. Its use was not widespread when I left in 2014; there was still the impression that it wasn't powerful enough for daily driving.
When I came back in 2020, emacs/vim use was much lower, again probably reflecting differences in the general population of developers. Many more of the developers had been trained in the post-2010 developer ecosystem of VSCode, IntelliJ, etc, and this was reflected in tool usage at Google too. I'd say IntelliJ was the dominant IDE, with Cider a close second and Cider-V just starting to take market share. You still had to pry emacs and vim from a grizzled old veteran's hands.
By 2022 I'd transferred to an Android team, and Android Studio with Blaze was the dominant IDE, even as general IntelliJ usage in the company was falling. Cider just didn't have the same Android-specific support. Company-wide, Cider-V was growing the fastest, taking market share from both IntelliJ and Cider.
By 2024 Cider-V was dominant and there started to be a concerted push to standardize on it, particularly since new AI agent tools were coming out and they couldn't be supported on all editors that Googlers wanted to use.
As of my departure in 2026, the company-wide push was to standardize on Antigravity [1], which, as I understand it, won a turf war within the developer tools org and got blessed as the "official" Google AI coding agent. This also has the effect of concentrating developer time dogfooding Google's external AI coding offering, which hopefully should improve its quality. There's still significant Cider-V usage, but it's dropping, and execs are pushing Antigravity hard.
Looks interesting; curious what your moat here is. What prevents Supabase/Neon from doing this? Actually, don't they already do this? How does this differ from the branching Neon and Supabase already offer?
I honestly thought it’d be wind that would make a dent in generation before solar, but I guess I was wrong.
This isn't a problem that can be solved by a clever entrepreneur. This is what government is for. When you have a shared resource that everyone needs, government is the best option for making sure its distribution is fair.
We already know how to solve this: make transmission owned by the government, make generation free-market. Cities do this already. The city of Santa Clara owns all the transmission, and then buys power on the open market along with generating themselves.
The result is that their power costs half as much as in all the surrounding cities that have PG&E.
Circa 2009 I was interested in automated link-building systems. There were some sites that had no defenses, but I saw enough going on around Reddit that I just didn't want to mess with it.
The enemy is both strong and weak.
The initial release of dnsmasq was in 2001. The list of viable languages for a high-performance network server at the time was still not all that long. Erlang wasn't on it. Too big a performance hit, too much opaque runtime that may not have been stable at the time, too few contributors, big dependency footprint of stuff most things wouldn't have installed. (When I used Erlang for a production system in more like the 2015 time frame it still had rough corners if you weren't using it exactly for the use case it was meant for.) This isn't specially a criticism of Erlang, it would have been like this across many languages and runtimes.
A lot of these systems that are getting hit, and will probably continue to be hit over the next few weeks or months, have a similar story. The Linux kernel's only other potentially viable choice was C++ at the time. OpenSSL, a perennial security offender, was started in 1998. You can look up your own favorite major system library with major security issues and it's probably the same story.
I'm as aggressive as anyone about saying "don't write a new project in C for network access", but cast me back to 1998 and I couldn't tell you what other viable choices there were either. There were safer languages, but they were much, much smaller than the C community, and I couldn't promise you how stable they were either. Java was out by then, and I don't know where to draw the exact line as to when it became a serious contender for a network server, but the late 200Xs would be my guess; certainly what I saw in 1999 wasn't it yet.
Example: I ran a Haskell network server in 2011 for something relatively unimportant and it fell over under conditions that would not have been very extreme for a production network; I know it was Haskell and not my code because I reused the same code base in 2013 with no changes in the core run loop and it did about 90% better; still not enough that I would have put that system into a real production use case but enough to show it wasn't my code failing. So while Haskell may have existed in the 200Xs, it wouldn't have qualified as a viable choice for a network server at the time.
There are a lot more viable choices today than there used to be.
Rust's async makes some design decisions that make it a unique feature: no other language has zero-allocation async, for example. (In C++'s version, you can get it to do no allocations if you do certain things, like making the required allocator a no-op, in my understanding, but it conceptually requires a call to an allocator.)
This makes it suitable for a much wider variety of tasks than other languages with similar features, but does mean that there are more details that you need to care about than in other languages that are higher level.
This means it is controversial: some people would prefer a higher level experience, but for those who do use it for its full range of tasks, it’s great.
There are some rough edges, but it’s just a feature that, even outside of Rust, some people just fundamentally dislike. So it draws a lot of heat from all sides.
It is also probably the single largest driver of adoption of the language. Rust started truly taking off once it landed.
That's not the reason. Cost of chargebacks falls entirely on the merchant. Visa/MC have no reason to care.
Has Mir in the past ever implemented any kind of bans or restrictions for specific vendors or use cases?
October 2021, Mastercard unilaterally imposes additional constraints on adult sites: https://www.commercegate.com/mastercard-issued-an-updated-se...
August 2021: OnlyFans CEO Blames Porn Ban on 'Unfair Actions' of Banks, Media: https://www.pcmag.com/news/onlyfans-ceo-blames-porn-ban-on-u...
April 2024: Japanese Adult content platform DLsite disables Visa/Mastercard payment after attempt to outsmart credit card companies: https://automaton-media.com/en/news/dlsite-disables-visa-mas...
It's not a sudden new thing. The financial theory seems to explain all the facts.
In 2026 it is probably not theoretical physics. Around 2000 or so the dam broke, and there have been numerous routes to quantum gravity which are plausible from a calculational point of view... but not a lot of experiments to rule them out, except for those that are the most blatantly Lorentz-violating. Until we see sparticles, proton decay, and the like, we can't say that much about the whole world of GUTs and strings that we're stuck in right now.
Raptor is a thing of beauty: https://sxcontent9668.azureedge.us/cms-assets/assets/Raptor_.... Look how polished it is. It looks like a fucking Apple product.
The Russians were really good at aerospace. It's a testament to their engineering that it took this long to advance past where they were in the 1970s. I love this video describing the development from the Russian RD270 all the way to Raptor: https://x.com/Erdayastronaut/status/1204179086823825408.
"Dutch suicide prevention hotline shares visitor data with tech companies" is certainly one way of saying "Dutch suicide prevention hotline website uses Google Analytics".
The no-GIL work (free-threading) is unrelated to this incremental GC work.
Free-threading actually uses its own, separate GC: https://labs.quansight.org/blog/free-threaded-gc-3-14
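If you want to poke at what your own interpreter is doing, the stdlib exposes a bit of it. A small sketch (the GIL check assumes CPython 3.13+, where sys._is_gil_enabled() was added):

import gc
import sys

# Allocation thresholds and current per-generation counts for the
# cycle collector on this build.
print("thresholds:", gc.get_threshold())
print("counts:", gc.get_count())

# On CPython 3.13+ you can ask whether this build is running free-threaded.
if hasattr(sys, "_is_gil_enabled"):
    print("GIL enabled:", sys._is_gil_enabled())
else:
    print("pre-3.13 interpreter: the GIL is always on")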
I have so many questions.
How long have you been doing this?
Are you at a product company, a consultancy, a place where technology is an enabler but not core, or somewhere else?
What happens when there are bugs or an outage due to that 3k LoC PR?
Yes, that was the thing that set off all the alarms. Trump 1 said a lot of things but had not completed the Hungary-fication of the US. Trump 1 was also before the Ukraine war!
Context: I grew up with a Commodore 64. So I wasn't around "since the beginning" of BASIC but I've clocked some real time with it.
Looking back on it, I don't think there was anything special about BASIC, from a modern point of view. Historically, yes, very significant language; that's not what I'm saying. What I'm saying is that BASIC's justified reputation for being an "easy" language is very much the product of what the competition was.
On the small side of the computer space, the competition was assembly. Yeah, BASIC is easier than assembly. But that's not really saying much. On the large-computer side you have some more competition with C and a handful of other viable languages, but they're all just terrible. You can't get a sense of what 1970s or 1980s C looked like by looking at a modern code base; C style has come a long way. The competition was even worse than it is today. (I'm also talking about the "average" code base, not things like the annotated UNIX source, which does look pretty nice but is also among the best of the best of the era.) Yeah, BASIC was easier than that, and again, that's not saying much.
It also sacrificed a looooot of capability for that easiness. We've built many languages since then that are easier than C without the massive sacrifice of capability. Some sacrifice, yes, but nowhere near as much. Having more computing resources available has not hurt, that's for sure.
By modern standards it makes a wide variety of terrible and/or baffling decisions. Even in its final form of Visual Basic .NET it was not a great language; if you target the earlier iterations, which are where it earned its reputation, it has even more baffling decisions. For instance, the memory model the language was built around was basically completely static allocation of everything. It's better to start with a good design than with a language that was built around that and fixed up later.
Based on the author's description of the situation, the easiest and most natural choice is to stick Lua on it. It's a modern language, the AIs know it, there's abundant documentation for it every which way, it's already a de facto standard in the gaming space so the users can be learning skills they can take elsewhere in the space later. If anything the biggest disadvantage is the amount of documentation a user can find that is Lua, but bound to some completely different environment that might confuse the newbie, or written by some other novice giving terrible advice.
But my real point here is that I would tend to discourage starting with BASIC, and having kind of already started, I'd advise dropping it ASAP. It's not a very good building point in 2026 and there are a lot of good ones to choose from.
Well, unless all the planned datacentres get built, using more electricity than some small countries.
> What's being delivered now is, an agent running on someone else's computer, copying your data to someone else's database, with zero responsibility, or mandate to protect that data and not share it with anyone else (in fact, they almost always promise to share it with their thousand partners), offering suggestions and preferences based on someone else's so-called recommendations, influenced by paying the agent's operators, and increasing pressure to make using someone else's computers + agents the only way to interact with other people and systems.
If we're going to have AI regulation, this is where to start. If a company's AI service acts for a user, the company has non-disclaimable financial responsibility for anything that goes wrong. There's an area of law called "agency", which covers the liability of an employer for the actions of its employees. The law of agency should apply to AI agents. One court already did that: an airline AI gave wrong but reasonable-sounding advice on fares, a customer made a decision based on that advice, and the court held that the AI's advice was binding on the company, even though it cost the company money.
This is something lawyers and politicians can understand, because there's settled law on this for human agents.
> Basic is just not particular useful and has a lot of funny behavior or missing parts for any serious project.
I think it is an interesting teaching tool. It has a lot of limitations that place it close to the machine level - all variables being global, no real named functions, and so on. It grounds the expectations about what a computer can and can't do - all the fancy things we do are smoke and mirrors layered on top of a very simple machine.
Why send the review to us? Give it to another AI to read.
So they don't have to handle the really hard case.
In x86 land, it's hard to find the instruction boundaries statically because, for historical reasons going back to the 8-bit era, x86 instructions don't have alignment restrictions. This is what makes translation ambiguous.
If you start at the program entry point and start examining reachable instructions, you can find the instruction boundaries. Debuggers and disassemblers do this. Most of the time it works, but you may have to recognize things such as C++ vtables. Debug info helps there. There may be ambiguity. This seems to be about generating all the possible code options to resolve that ambiguity by brute-force case analysis.
x86 doesn't have explicit code/data separation, which some architectures do. So they have to try instruction decoding on all data built into the executable. They cull obvious mistranslations, yet they still have a 50x space expansion, as someone mentioned. Most of those will be unreachable mistranslated code.
You can't look at a static executable which uses pointers to functions and say "that data cannot possibly be code", without constraining what those pointers point to. That involves predicting run-time behavior, which may not be possible.
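A minimal demonstration of that ambiguity, assuming the capstone disassembler's Python bindings (my example, not from the article): decode the same six bytes starting at two different offsets and you get two valid instruction streams.

from capstone import Cs, CS_ARCH_X86, CS_MODE_64

code = b"\xb8\x2a\x00\x00\x00\xc3"  # mov eax, 42; ret
md = Cs(CS_ARCH_X86, CS_MODE_64)

for start in (0, 1):
    print(f"decoding from byte offset {start}:")
    # Skipping even one byte yields a different, equally legal stream.
    for insn in md.disasm(code[start:], start):
        print(f"  {insn.address:#x}: {insn.mnemonic} {insn.op_str}")

Offset 0 decodes as mov eax, 0x2a; ret, while offset 1 decodes as sub, add, ret; nothing in the bytes themselves says which stream was intended.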
I read those bytes and immediately thought "mov eax, 42; ret".
Appeal to nature is something that definitely cuts across the political spectrum.
It splits the input into adaptively-sized blocks (quanta), runs a competition between many specialized codecs on each block, and emits the smallest result.
This is, for lack of a better term, a "metacompressor", but it will be interesting to see which of the choices end up dominating; in my past experiences with metacompression, one algorithm is usually consistently ahead.
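As a sketch of the block-competition idea (not the author's code; stdlib codecs stand in for the specialized ones, and a fixed block size stands in for the adaptive quanta):

import bz2
import lzma
import zlib

BLOCK = 64 * 1024  # hypothetical fixed block size

CODECS = {
    b"Z": zlib.compress,
    b"B": bz2.compress,
    b"L": lzma.compress,
    b"R": lambda b: b,  # raw passthrough for incompressible blocks
}

def metacompress(data: bytes) -> bytes:
    """Compress each block with every codec and keep the smallest output."""
    out = []
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        tag, best = min(
            ((t, fn(block)) for t, fn in CODECS.items()),
            key=lambda pair: len(pair[1]),
        )
        out.append(tag + len(best).to_bytes(4, "big") + best)
    return b"".join(out)

The one-byte tag per block is what makes decompression possible, and also why, if one codec consistently wins, the per-block competition is pure overhead.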
Likewise, I'm also not very demanding of my text editor. I use vi on *nix systems and Notepad (the original one, not the new bloated monstrosity) on Windows for most of my work. Navigation, basic editing, and searching are probably all I need.
> disallowing archival is practically suicide
The Times alone pulls a multiple of the Internet Archive’s visitors [1][2].
"stealing" is BS because the original still exists. Copyright infringement is more correct.
That's incorrect, or at best misleadingly incomplete. 8 USC §1357(a) authorizes border agents to, "(3) within a reasonable distance from any external boundary of the United States, to board and search for aliens any vessel within the territorial waters of the United States and any railway car, aircraft, conveyance, or vehicle, and within a distance of twenty-five miles from any such external boundary to have access to private lands, but not dwellings, for the purpose of patrolling the border to prevent the illegal entry of aliens into the United States."
The associated regulations, 8 C.F.R. §287.1, interpret "reasonable distance" to mean up to 100 miles from the actual border. But: "In fixing distances not exceeding 100 air miles pursuant to paragraph (a) of this section, chief patrol agents and special agents in charge shall take into consideration topography, confluence of arteries of transportation leading from external boundaries, density of population, possible inconvenience to the traveling public, types of conveyances used, and reliable information as to movements of persons effecting illegal entry into the United States."
The statute and regulation just mean that agents don't need to patrol the border at the literal border--which in some cases runs through the middle of bodies of water. But determinations of what's a "reasonable distance" from the border must be justified by what's needed to prevent illegal entry into the United States, based on factors such as topography and transportation routes.
Scrcpy is fantastic, no idea how it Just Works™ so smoothly and painlessly, but it does.
In Spain and France, once the legislature approves a transit project, it preempts all other laws and is very difficult to litigate out of existence. Lawsuits cripple our ability to build infrastructure.
Hey, someone submitted my old article. On my birthday!
Oh, people hate it… and even someone I definitely look up to.
You're absolutely right, though, I don't remember it being that bad, and probably I just read over it when resurrecting the article, because I'm so familiar with every word.
I'll slap some <hr> tags on it when I'm back home from my holiday.