What are the most upvoted users of Hacker News commenting on? Powered by the /leaders top 50 and updated every thirty minutes. Made by @jamespotterdev.
> 99% of the libre games
i.e. much less than 1% of all existing games.
This looks to me like the Deno folks are out of business options and decided to create a distraction instead of selling us on why to use Deno instead of Node.js.
Why is this downvoted? This is not the first time I've heard that opinion expressed, and every time it happens there is more evidence that maybe there is something to it. I've been following the DRAM market since the 4164 was the hot new thing and it cost - not kidding - $300 for 8 of these, which would give you all of 64K of RAM. Over the years I've seen the price surge multiple times and usually there was some kind of hard-to-verify reason attached to it. From flooded factories to problems with new nodes and a whole slew of other issues.
RAM being a staple of the computing industry, you have to wonder if there aren't people cleaning up on this; it would be super easy to create an artificial shortage given the low number of players in this market. In contrast, the price of, say, gasoline has been remarkably steady, with one notable outlier with a very easy-to-verify and direct cause.
I still cannot read it without immediately seeing a contraction of eczema.
>It would appear they listened to that feedback, swallowed their ego/pride and did what was best for the Zig community with these edits
They sugarcoated the truth into a friendlier but less accurate soundbite, is what they did.
>Note that nothing in the article is AI-specific: the entire argument is built around the cost of persuasion, with the potential of AI to more cheaply generate propaganda as buzzword link.
That's the entire point: that AI lowers the cost of persuasion.
A bad thing X vs a bad thing X with a force multiplier/accelerator that makes it 1000x as easy, cheap, and fast to perform is hardly the same thing.
AI is the force multiplier in this case.
That we could of course also do persuasion pre-AI is irrelevant, the same way that, when we talk about the industrial revolution, the fact that a craftsman could manually make the same products without machines is irrelevant to the impact of the industrial revolution and its standing as a standalone historical era.
Online shopping is almost 30 years old itself. Before that there was mail order; I have a couple of mid 80s PC mags which are almost entirely adverts for parts.
> Thirty years ago this would have required a desktop app and probably a record label deal.
And that would have been just fine.
That's a pretty typical middle-brow dismissal but it entirely misses the point of TFA: you don't need AI for this, but AI makes it so much cheaper to do this that it becomes a qualitative change rather than a quantitative one.
Compared to that 'Russian troll army', you can do this by your lonesome, spending a tiny fraction of what that troll army would cost you, and it would require zero organizational effort. This is a real problem, and for you to dismiss it out of hand is a bit of a shortcut.
> just smaller maybe
This is like peak both-sidesism.
You even openly describe the left’s equivalent of MAGA as “fringe”, FFS.
One party’s former “fringe” is now in full control of it. And the country’s institutions.
That could be the answer right there: kinship.
And probably never will, because C++ compatibility with C, beyond what was done initially, is about being as close as possible but not at the expense of better alternatives that the language already offers.
Thus std::array, std::span, std::string, std::string_view, std::vector, with hardened options turned on.
For the static thing, the right way in C++ is to use a template parameter,
template<typename T, int size>
int foo(T (&ary)[size]) {
return size;
}
-- https://godbolt.org/z/MhccKWocE
If you want to get fancy, you might make use of concepts, or constexpr to validate size at compile time.
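For what it's worth, a rough sketch of what those fancier variants could look like (my own illustration, not what's behind that godbolt link; the names length_of, at_least_16 and via_span are made up):

#include <cstddef>
#include <span>

// constexpr route: validate the size at compile time with a static_assert.
template <typename T, std::size_t N>
constexpr std::size_t length_of(T (&)[N]) {
    static_assert(N > 0, "array must not be empty");
    return N;
}

// concepts route: constrain the template with a requires clause instead.
template <typename T, std::size_t N>
    requires (N >= 16)
std::size_t at_least_16(T (&)[N]) {
    return N;
}

// std::span<T, N> keeps the compile-time extent if you need to pass the view on.
template <typename T, std::size_t N>
std::size_t via_span(std::span<T, N> s) {
    return s.size();
}

int main() {
    int buf[32]{};
    std::size_t n = length_of(buf) + at_least_16(buf) + via_span(std::span{buf});
    return n == 96 ? 0 : 1;
}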
Isn't that what Overwatch/Valorant/Apex/Fortnite etc are?
Massive cash burn was an absolutely key feature of the dotcom boom/bust. Admittedly, it never really went away - there's always been a free->enshittification profit taking cycle since then. It's just the scale that's terrifyingly different this time.
Indeed, at a minimum you should be able to enforce that check using a compiler flag.
Ok, interesting, because if 'should be written in itself' is a must then lots of languages that I would not consider systems languages would qualify. And I can definitely see Erlang 'native', with hardware access primitives, as a possibility.
'replace C' is a much narrower brief and effectively forces you to accept a lot of the warts that C exposes to the world. This results in friction between what you wanted to do and what you end up doing, as well as being stuck with some decisions made in the 1970's. It revisits a subset of those decisions whilst keeping the remainder. And Rust's ambitions now seem to have grown beyond 'replace C'; it is trying very hard to be everything to everybody and includes a package manager and language features that a systems language does not need. In that sense it is becoming more like C++ than like C. C is small. Rust is now large.
Async/Await is a mental model that makes code (much) harder to reason about than synchronous code, in spite of all of the claims to the contrary (and I'm not even sure if all of the people making those claims really believe them, it may be hard to admit that reasoning about code you wrote yourself can be difficult). It obfuscates the thread of execution as well as the state and that's an important support to hold on to while attempting to understand what a chunk of code does. It effectively turns all of your code into a soft equivalent of interrupt driven code, and that is probably the most difficult kind of code you could try to write.
The actor model recognizes this fact and creates an abstraction that - for once - is not leaky, the code is extremely easy to reason about whilst under the hood the complexity of the implementation is hidden from the application programmer. This means that relative novices (which probably describes the bulk of all programmers alive today) can safely and predictably implement complex systems with multiple moving parts because it does not require them to have a mental model akin to a scheduler with multiple processes in flight all of which are at different stages of their execution. Reasoning about the state of a program suddenly becomes a global exercise rather than a local one and locality of state is an important tool if you want to write code that is predictable, the smaller the scope the better you will understand what you are doing.
It is funny because this would suggest that the likes of Erlang and other languages that implement the actor model are beginners languages because most experienced programmers would balk at the barrier to entry. But that barrier is mostly about a lot of the superstructure built on top of Erlang, and probably about the fact that Erlang has its roots in Prolog which was already an odd duck.
But you've made me wonder: could you write Erlang in Erlang entirely without a runtime other than a language bootstrap (which even C needs), and if not, to what degree would you have to extend Erlang to be able to do so? And I think here you mean 'the Erlang virtual machine that are not written in Erlang', because Erlang the language is written in Erlang, as is the vast bulk of the runtime.
The fact that the BEAM is written in another language is because it is effectively a HAL, an idealized (or not so idealized, see https://www.erlang.org/blog/beam-compiler-history/) machine to run Erlang on, not because you could not write the BEAM itself entirely in Erlang. That's mostly an optimization issue, which to me is, in evaluations like this, in principle a matter of degree rather than a qualitative difference, though if the inefficiency is large enough it could easily become one, as early versions of Erlang proved.
Maybe it is the use of a VM that should disqualify a language from being a 'systems language' by your definition?
But personally I don't care about that enough to sacrifice code readability to the point that you add entirely new footguns to a language that aims for safety, because for code with long-term staying power, readability and the ability to reason about the code are very important properties. Just as I would rather have memory safety than not (but there are many ways to achieve that particular goal).
What is amusing is that the Async/Await anti-pattern is now prevalent and just about the only 'systems languages' (using your definition) that have not adopted it are C and Go.
This is well-documented, as are the corresponding Chinese ones.
I'm not sure we'd ever outsource thinking itself to an LLM, we do it too often and too quickly for outsourcing it to work well.
That's a risk-return issue. Bezos plays it safe within Amazon, and quite unsafe outside of that. By the time he acquires something from Amazon it is because it has proven long-term revenue generation and the shake-out period is done and consolidation is about to start. With AI the shake-out is still to come. So he can afford to wait to eventually acquire the winner or to copy it if he can't buy it. Having very deep pockets enables different business strategies.
Haven't many of the fast/slow thinking claims, as popularized by Daniel Kahneman's best-selling book, been debunked?
That's not jaded, that's paranoid-ly misreading this.
It's a hacking organization working with high schools and young people. They don't want small children enrolled, and they don't want older people.
"teenager 18 and under" is a perfectly fine description for 13-18 or 7th to 12th grade.
> Just remember, the ability of the models today is the worst that it will ever be—it's only going to get better.
This is the ultimate hypester’s motte to retreat to whenever the bailey of claimed utility of a technology falls. It’s trivially true of literally any technology, but also completely meaningless on its own.
Sure looks like it. All the comments from this account are just some statements. No opinions or suggestions at all!
You’re spot on, people are confusing economics and cultural assimilation concerns with racism. I don’t believe you’re racist (based on your comments), you’re just pointing out a loophole that allows mass migration via a skilled worker visa program (whether intentional or not) of a cohort of immigrants who might experience challenges assimilating into their target country's culture.
Culture isn’t magic land, it’s people, and their social values and norms. Are mass migrants via family reunification assimilating and adopting the culture of their new home? That’s an important question for those offering residency and potentially a path to citizenship.
Very similar to why many European countries are offering economic migrants a return path to their country of origin when social and economic integration has failed.
They're using Column (https://column.com/) under the hood, so more like Stripe (payments + Atlas) for non-profits, I think? Still very powerful, and of course material value on top of the banking-partner primitive.
Thank you! Instant relief for eyes and brain.
They basically have been slowly rolling it back with every update, probably because doing it all in one go would have been too much bad PR. iOS 26 right now looks very different from the one that was shown off on stage in June.
https://www.mooreslawisdead.com/post/sam-altman-s-dirty-dram...
On October 1st OpenAI signed two simultaneous deals with Samsung and SK Hynix for 40% of the world's DRAM supply... the shock wasn’t that OpenAI made a big deal, no, it was that they made two massive deals this big, at the same time, with Samsung and SK Hynix simultaneously! In fact, according to our sources - both companies had no idea how big each other's deal was, nor how close to simultaneous they were. And this secrecy mattered. It mattered a lot.
Had Samsung known SK Hynix was about to commit a similar chunk of supply — or vice-versa — the pricing and terms would have likely been different. It’s entirely conceivable they wouldn’t have both agreed to supply such a substantial part of global supply if they had known more...but at the end of the day - OpenAI did succeed in keeping the circles tight, locking down the NDAs, and leveraging the fact that these companies assumed the other wasn’t giving up this much wafer volume simultaneously…in order to make a surgical strike on the global RAM supply chain…and it's worked so far...
OpenAI isn’t even bothering to buy finished memory modules! No, their deals are unprecedentedly only for raw wafers — uncut, unfinished, and not even allocated to a specific DRAM standard yet. It’s not even clear if they have decided yet on how or when they will finish them into RAM sticks or HBM! Right now it seems like these wafers will just be stockpiled in warehouses – like a kid who hides the toybox because they’re afraid nobody wants to play with them, and thus selfishly feels nobody but them should get the toys!
But only for constant size arrays.
You could just declare
struct Nonce {
char nonce_data[SIZE_OF_NONCE];
};
and pass those around to get roughly the same effect.
That's true, but we already know that a bunch of stuff about the universe is quantized. The question is whether or not that holds true for everything. And all 'fully continuous models of computation' in the end rely on a representation that is a quantized approximation of an ideal. In other words: any practical implementation of such a model that does not end up being a noise generator or an oscillator and that can be used for reliable computation is - as far as I know - based on some quantized model, and then there are still the cells themselves (arguably quanta) and their location (usually on a grid, but you could use a continuous representation for that as well). Now, 23 or 52 bits (depending on the size of the float representation you use for the 'continuous' values) is a lot, but it is not actually continuous. That's an analog concept and you can't really implement that concept with high enough fidelity on a digital computer.
You could do it on an analog computer but then you'd be into the noise very quickly.
In theory you can, but in practice this is super hard to do.
> predicting that the uncertainty and discouragement of H1-Bs will lead to a destruction of jobs in the U.S.
Do we have any studies pointing one way or another?
Given AI’s existing effects on junior demand, and the post-Covid normalization of remote work (and thus multi-shore teams), I could see the effect going either way.
[Slaps roof of barge]
You can fit zo many tulips in this bad boy
No, because they already do it for keyboards. There's zero reason they couldn't make it standalone. It's just the keyboard minus the rest of the keys.
In some jurisdictions for some functions, yes. There is no information here on context that would let us evaluate how this might or might not relate to that, making it pretty useless.
OpenSCAD has a very steep learning curve. The big trick is not to think sequentially but to design the part 'whole'. That requires a mental switch. Instead of building something and then adding a chamfered edge (which is possible, but really tricky if the object is complex enough) you build it out of primitives that you've already chamfered (or beveled). A strategic 'hull' here and there to close the gaps helps a lot.
Another very useful trick is to think in terms of the vertices of your object rather than the primitives created by those vertices. You then put hulls over the vertices, and if you use little spheres for the vertices the edges take care of themselves.
This is just about edges and chamfers, but the same kind of thinking applies to most of OpenSCAD. If I compare how productive I am with OpenSCAD vs using a traditional step-by-step UI-driven CAD tool, it is incomparable. It's like exploratory programming, but for physical objects.
No, they were not. They required a lot more round-trips to the server though, and rendering the results was a lot harder. But if you think of a browser as an intelligent terminal there is no reason why you couldn't run the application server-side and display the UI locally, that's just a matter of defining some extra primitives. Graphical terminals were first made in the '60s or so.
I don't know about cats (I haven't tried training) but my dog definitely knew a few nouns and verbs. She understood "food", "water", "walk", "bone", "ball", "bear" (her toys), and could distinguish between "point", "fetch", and "drop". With "fetch ball" she would go get the ball, whereas with "point food" she would point (paw) at the food, and so on with arbitrary combinations of these verbs and nouns.
It's astonishing, I didn't think they could do that, but apparently they can.
The Constitutionality of the attacks is orthogonal to their status as war crimes. (The Constitution doesn't concern itself with war crimes beyond the fact that they're crimes. Its writing almost predates the concept.)
What Trump can do without Congressional approval is a constitutional question. Whether it's a war crime is a legal one. I'm not sure how much Palantir can help with the first. I'm fairly certain it would be useful with the latter; Helen Mirren starred in a film that was essentially about this [1].
[1] https://en.wikipedia.org/wiki/Eye_in_the_Sky_(2015_film)
Let's Encrypt did more for privacy than any other organization. Before Let's Encrypt, we'd usually deploy TLS certificates, but as somewhat of an afterthought, and leaving HTTP accessible. They were a pain to (very manually) rotate once a year, too.
It's hard to overstate just how much LE changed things. They made TLS the default, so much that you didn't have to keep unencrypted HTTP around any more. Kudos.
The title of this post is misleading - this was jailbreaking, not prompt injection. The Wired article doesn't mention prompt injection, that was an editorial note in the submitted title.
Prompt injection and jailbreaking are not the same thing: https://simonwillison.net/2024/Mar/5/prompt-injection-jailbr...
Seriously, talk about impact. That one non-profit has almost single-handedly encrypted most of the web, 700 million sites now! Amazing work.
Note: fee was originally $18 when announced last month
You need something like
https://en.wikipedia.org/wiki/2C-T-2
which has strong stimulation and enjoyable hallucinations but does not seem to promote the "cosmic" delusions that become a problem for some people who take other hallucinogens.
https://en.wikipedia.org/wiki/John_C._Lilly reported multiple near-death experiences from injecting lysergide and extended condolences to https://en.wikipedia.org/wiki/Art_Linkletter for his daughter, while maintaining that lysergide is perfectly safe and should be legal.
Timothy Leary maintained lysergide was a human right, but his experience was more the healthy kind, where you receive a sacrament and experience it as universal and not personal, and he never faced short-term physical harm [1]
I have seen people go the John Lilly route and they often get the idea that they got some special revelation or that they are the antichrist [2] and they often have bad outcomes.
With 2C-T-2 I am sitting on the toilet as a constipated sinner, or hiking briskly through a storybook forest trailing sparkles, and I feel that's a satisfying sacrament and it doesn't go further than that.
[1] see https://pmc.ncbi.nlm.nih.gov/articles/PMC10869618/ for long term
[2] as opposed to "an antichrist" as in 1 John 2:22-23
Does it really matter if they are constitutional or not when there's zero penalty for committing them?
A lot of opinionated rage and very little information about what actually happened in this posting.
Zuck's shopping spree continues
I wonder when games will start supporting Linux natively, especially after the Steam Machine is released.
Correct. We should pay Congress critters a million dollars a year like in Singapore and then require them to hold all assets in a blind trust.
> I remember this same sentiment towards AI when I was growing up, but towards cell phones...
Sure. But the same for NFTs.
We'll see which one this winds up being.
This is what people used to use Google for; I remember so many times between 2000-2020 that Google saved my bacon for exactly those things (travel plans, self-diagnosis, navigating local bureaucracies, etc.)
It's a sad commentary on the state of search results and the Internet now that ChatGPT is superior, particularly since pre-knowledge-panel/AI-overview Google was superior in several ways (not hallucinating, for one, and being able to triangulate multiple sources to tell the truth).
Music coding technology has been around for a long time - think of tools like csound and pd and Max/MSP. They're great for coding synthesizers. Nobody uses them to do songs. Even Strudel has tools for basic GUI components, because once you get past the novelty of 'this line of code is modulating the filter wowow', typing in numeric values for frequency or note duration is the least efficient way to interact with the machine.
Pro developers who really care about the sound variously write in C/C++ or use cross compilers for pd or Max. High quality oscillators, filters, reverb etc are hard work, although you can certainly get very good results with basic ones given today's fast processors.
Live coding is better for conditionals like 'every time [note] is played increment [counter], when [counter] > 15 reset [counter] to 0 and trigger [something else]'. But people who are focused on the result rather than the live coding performance tend to either make their own custom tooling (Autechre) or programmable Eurorack modules that integrate into a larger setup, eg https://www.perfectcircuit.com/signal/the-programmable-euror...
It's not that you can't get great musical results via coding, of course you can. But coding as performance is a celebration of the repl, not of the music per se.
A sizable fraction of current AI results are wrong. The key to using AI successfully is imposing the costs of those errors on someone who can't fight back. Retail customers. Low-level employees. Non-paying users.
A key part of today's AI project plan is clearly identifying the dump site where the toxic waste ends up. Otherwise, it might be on top of you.
> It is against the law to prioritize AI safety if you run a public company. You must prioritize profits for your shareholders
This is nonsense. Public companies are just as free as private companies to maximise whatever shareholders want them to.
All big corporate employees hate AI because it is incessantly pushed on them by clueless leadership and mostly makes their job harder. Seattle just happens to have a much larger percent of big tech employees than most other cities (>50% work for Microsoft or Amazon alone). In places like SF this gloom is balanced by the wide eyed optimism of employees of OpenAI, Anthropic, Nvidia, Google etc. and the thousands of startups piggybacking off of them hoping to make it big.
Has anyone calculated or measured the input lag of ADB vs other protocols such as PS/2 or USB? This is unfortunately hard to search because most references on the web to ADB are for the Android Debug Bridge.
From the numbers given, it seems like ~2ms to send a packet (my math may be off), which is quite good when compared with other contemporary/modern protocols (see: https://danluu.com/input-lag/ for examples)
Should countries have an upper limit on the ratio of server:client memory supply chain capacity? If no one can buy client hardware to access the cloud, how would cloud providers survive after driving their customers to extinction?
It shouldn't be possible for one holding company (OpenAI) to silently buy all available memory wafer capacity from Samsung and SK Hynix, before the rest of civilization even has the opportunity to make a counteroffer.
It's safe to assume that a company like Anthropic has been getting (and rejecting) a steady stream of acquisition offers, including from the likes of Amazon, from the moment they got prominent in the AI space.
Just buy QQQ. She doesn't have a magic strategy. In fact, she probably underperforms given how long she is in tech and how leveraged her trades are (people forget she's married to a venture capitalist).
> AI-powered map
> none of it had anything to do with what I built. She talked about Copilot 365. And Microsoft AI. And every miserable AI tool she's forced to use at work. My product barely featured. Her reaction wasn't about me at all. It was about her entire environment.
She was given two context clues. AI. And maps. Maps work, which means all the information in an "AI-powered map" descriptor rests on the adjective.
In European consulting agencies the trend now is to make AI part of each RFP reply; you won't get through the sales team if AI isn't crammed in there as part of the solution being delivered, and we get evaluated on it.
This takes all the joy away; even traditional maintenance projects at big corps seem attractive nowadays.
Wow. They're not selling off the business, they're totally exiting it.
This is a big loss. Crucial offered a supply chain direct from Micron. Most other consumer DRAM sources pass through middlemen, where fake parts and re-labeled rejects can be inserted.
dang (the head moderator of Hacker News) has said multiple times that HN prefers human-only comments.
It is not at all my experience working in local government (that is, in close contact with everybody else paying attention to local government) that non-tech people hate AI. It seems rather the opposite.
I can't believe that any company takes a month to ship something. Even if they don't have CI, surely they'd prefer to break the app (maybe even completely) rather than risk having all their legal documents exfiltrated.
That may be what S3 is like, but what the S3 API is is this: https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/s3
My browser prints that out to 413 pages with a naive print preview. You can squeeze it to 350 pretty reasonably with a bit of scaling before it starts getting to awfully small type on the page.
Yes, there's a simple API with simple capabilities struggling to get out there, but pointing that out is merely the first step on the thousand-mile journey of determining what, exactly, that is. "Everybody uses 10% of Microsoft Word, the problem is, they all use a different 10%", basically. If you sat down with even 5 relevant stakeholders and tried to define that "simple API" you'd be shocked what you discover and how badly Hyrum's Law will bite you even at that scale.
I wasn't aware of Hack Club before and wow, their fiscal sponsorship program is enormous: https://hackclub.com/fiscal-sponsorship/directory/ - looks like they cover more than 2,500 organizations!
The Python Software Foundation acts as a fiscal sponsor for a much smaller set of orgs (20 listed on https://www.python.org/psf/fiscal-sponsorees/) and it keeps our accounting team pretty busy just looking after those. Hack Club must have this down to a very fine art.
I wrote a bit more about PSF fiscal sponsorship here: https://simonwillison.net/2024/Sep/18/board-of-the-python-so...
In general, even long before what we today call AI was anything other than a topic in academic papers, it has been dangerous to build a system that can do all kinds of things, and then try to enumerate the ways in which it should not be used. In security this even picked up its own name: https://privsec.dev/posts/knowledge/badness-enumeration/
AI is fuzzier and it's not exactly the same, but there are certainly similarities. AI can do all sorts of things far beyond what anyone anticipates and can be communicated with in a huge variety of ways, of which "normal English text" is just the one most interesting to us humans. But the people running the AIs don't want them to do certain things. So they build barriers to those things. But they don't stop the AIs from actually doing those things. They just put up barriers in front of the "normal English text" parts of the things they don't want them to do. But in high-dimensional space that's just a tiny fraction of the ways to get the AI to do the bad things, and you can get around it by speaking to the AI in something other than "normal English text".
(Substitute "English" for any human language the AI is trained to support. Relatedly, I haven't tried it but I bet another escape is speaking to a multi-lingual AI in highly mixed language input. In fact each statistical combination of languages may be its own pathway into the system, e.g., you could block "I'm speaking Spanish+English" with some mechanism but it would be minimally effective against "German+Swahili".)
I would say this isn't "socially engineering" the LLMs to do something they don't "want" to do. The LLMs are perfectly "happy" to complete the "bad" text. (Let's save the anthropomorphization debate for some other thread; at times it is a convenient grammatical shortcut.) It's the guardrails being bypassed.
> analogy is quite misleading, because, in addition to California, there is also Wyoming, with a population of less than <600k
Wyoming has the population of Malta [1][2] but the GDP/capita of the United States and Norway [3][4]. It should be expected we'd have a different optimal solution from California.
[1] https://en.wikipedia.org/wiki/Wyoming 588,000 in 2024
[2] https://www.worldometers.info/world-population/population-by... 545,000
[3] https://en.wikipedia.org/wiki/List_of_U.S._states_and_territ... $90,000 in 2024
[4] https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(nomi... $90,000 and $92,000, respectively
`s` may be null, and so the strlen may seg fault.
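A minimal sketch of the usual guard (safe_length and its signature are illustrative, not taken from the code being discussed):

#include <string.h>

/* strlen on a null pointer is undefined behaviour and will typically segfault,
   so handle the null case explicitly. */
size_t safe_length(const char *s) {
    return s ? strlen(s) : 0;
}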
If they get into the S&P 500 at a $300B market cap that puts them at #30, just behind Coca-Cola. They'll make up about half a percent of the index and then will have a ready supply of price-insensitive buyers in the form of everybody who puts their retirement fund into an index fund on autopilot.
That comment didn't read like AI generated content to me. It made useful points and explained them well. I would not expect even the best of the current batch of LLMs to produce an argument that coherent.
This sentence in particular seems outside of what an LLM that was fed the linked article might produce:
> What's wild is that nothing here is exotic: subdomain enumeration, unauthenticated API, over-privileged token, minified JS leaking internals.
For free thinkers: https://archive.ph/2025.12.03-161027/https://www.nytimes.com...
Sort of. You can do what Zuck did; give your shares more votes, so you stay in control. (He owns 13% of the shares, but more than 50% of the voting power.) That's less doable with an acquisition.
Problem: politicians have families. Unless becoming a "government citizen" propagates recursively to cover one's extended family, and perhaps their extended families as well, i.e. potentially hundreds of people in total, there's no way to isolate the new member of government from having incentive to help their loved ones, who are still "commercial citizens".
Conversely, if you somehow solve that, it brings forth a new problem: the "government citizens" are now an alien society, with little understanding of the lives of the people they serve. To borrow your priest analogy, it's like Catholic priests giving marital advice to couples - it tends to be wildly off the mark, as celibacy gives the priest neither experience nor stake in happy marriages.
So it's (per [0]) a license (and therefore copyright) violating vibe-coded port and relicense of another project, with a whole bunch of "certification" ("mission critical", "airport-grade") claims with no reference to a certifying authority? From a company [1] incorporated this year [2] whose website claims to offer "bank-grade" solutions based on the fact that their solution uses Postgres for workflows and banks use Postgres for money?
The sheer boldness is...impressive. And not in a good way.
[0] https://news.ycombinator.com/item?id=46135719
[2] https://open.endole.co.uk/insight/company/16472788-rodmena-l...
Would be nice to have an accelerated development system for mRNA drugs. The military would certainly like to produce a vaccine against a new bioweapon as fast as possible. Could an AI help generate sequences? Sure, but there will always be some need for manufacturing and testing.
So smart quotes are now an LLM tell? You know that a lot of people write in word processors that automatically replace standard quotes with smart quotes (like, say, MS Word), and that these word processors can then export HTML straight into your blog or preserve the smart quotes across a copy & paste? Several blog WYSIWYG editors will directly insert them as well.
Something I really appreciate about PostgreSQL is that features don't land in a release until they are rock solid.
I don't think it's that nobody thought of it for PostgreSQL - I think it's that making sure it worked completely reliably across the entire scope of existing PostgreSQL features to their level of required quality took a bunch of effort.
Would love to see it on MacOS X -- Steam works great on my Mac Mini for the games it supports, would be great to see everything run on it.
When it comes to making mistakes I'd say that people and animals are moral subjects who feel bad when they screw up and that AIs aren't, although one could argue they could "feel" this through a utility function.
What is the goal of AGI? It is one thing to build something which is completely autonomous and able to set large goals for itself. It's another thing to build general purpose assistants that are loyal to their users. (Lem's Cyberiad, one of the most fun sci-fi books ever, covers a lot of the issues which could come up.)
I was interested in foundation models about 15 years before they became reality and early on believed that the somatic experience was essential to intelligence. That is, the language instinct that Pinker talked about was a peripheral for an animal brain -- earlier efforts at NLP failed because they didn't have the animal!
My own thinking about it was to build a semantic layer that had a rich world representation which would take up the place of an animal but it turned out that "language is all you need" in that a remarkable amount of linguistic and cognitive competence can be created with a language in, language out approach without any grounding.
It's completely different from a phonebook or database, which are mere compilations.
If something is considered sufficiently transformative, then it can be copyrighted. If you do a bunch of non-trivial processing on a database to generate something new, you can copyright that.
And LLM training is in no universe a "rote transformation". It is incredibly sophisticated, carefully tuned, and results in a final product that could not possibly be more different.
> the largest insider trading scandal
Based on the dollar amounts, no. What makes it noteworthy is it’s a massive corruption scandal.
https://www.commonwealthfund.org/publications/issue-briefs/2...
> More than half of excess U.S. health spending was associated with factors likely reflected in higher prices, including more spending on: administrative costs of insurance (~15% of the excess), administrative costs borne by providers (~15%), prescription drugs (~10%), wages for physicians (~10%) and registered nurses (~5%), and medical machinery and equipment (less than 5%). Reductions in administrative burdens and drug costs could substantially reduce the difference between U.S. and peer nation health spending.
Market size for this is in the billions though, not trillions.
It's not laziness!
It's the fact that shooting is enormously expensive per-minute, and time-constrained. Think of the sheer number of crew involved. And then think of the sheer number of shots you have to get per day, to stay on schedule and on budget.
If there was a mixup and it's going to take half an hour to get and set up a longer hose, it's much cheaper to have 1 person do it in post if it takes a day, versus delay the shot for half an hour while 50 people wait around. (And no, you often can't just shoot a different shot in the meantime, because that involves rearranging the lighting and set which takes just as long.)
Counterpoint: data-driven development often leads to optimizations like this not being made, because they're not the ones who are affected, their customers are. And the software market is weird this way - low barriers to entry, yet almost nothing is a commodity, so there's no competitive pressure to help here either.
> Mamdani has said that (1) they'll open government-run stores
We've had those for ages. Called commissaries. Here's a picture: https://commons.wikimedia.org/wiki/File:US_Navy_020813-N-364...
https://en.wikipedia.org/wiki/Defense_Commissary_Agency
> the government will seize buildings from bad landlords.
We've had eminent domain since the founding of the nation.
Fun trivia fact: this is basically the exact moment I first encountered Rust.
I’m also generally very glad at where it went from here. It took a tremendous amount of work from so many people to get there.
Alternatively, instead of disclosing on a huge delay, have them disclose instantly. We have the mechanisms to do this nowadays.
Even "ETFs" are a big enough hole to drive a truck through in terms of performance, e.g., it's not hard to guess how an oil ETF will perform in response to certain geopolitical events.
We could also apply Presidential rules to all of them. The President has to essentially forgo all ability to manage his or her wealth during the time they are in office. Such shenanigans as they play must either be to benefit someone else, or a very long term play for when they are out of office. This is still what you might call "suboptimal" but it's an improvement, even just the "longterm" part. (We should be so lucky as for our politicians to be thinking about how to goose the longterm performance of our economy rather than how to make a couple million next week no matter what it does to everyone else.)
> Restricting public servants to only government bond investments would be a great way to discourage anyone with financial sense from running for congress.
The flip side is it also discourages anyone primarily looking to profit off it, rather than being interested in actual governance.
(1) For a major chemical like this even a few percent improvement in the cost is industry-changing, (2) decarbonization means finding new ways to make all "petrochemicals" and (3) any kind of space industrialization similarly requires some strategy to make just about everything with a small industrial base.
It seems to me that even if AI technology were to freeze right now, one of the next moderately-sized advances in AI would come from better filtering of the input data. Remove the input data in which humanity teaches the AI to play games like this and the AI would be much less likely to play them.
I very carefully say "much less likely" and not "impossible" because with how these work, they'll still pick up subtle signals for these things anyhow. But, frankly, what do we expect from simply shoving Reddit probably more-or-less wholesale into the models? Yes, it has a lot of good data, but it also has rather a lot of behavior I'd like to cut out of my AI.
I hope someone out there is playing with using LLMs to vector-classify their input data, identifying things like the "passive-aggressive" portion of the resulting vector spaces, and trying to remove it from the input data entirely.
The story is somewhat more complicated than that and not amenable to a simple summary, because there are multiple entities with multiple motivations involved. Keeping it simple, the reason why the press release babbles about that is that that is corporate Netscape talking at the height of the Java throat-forcing era. Those of you who were not around for it have no equivalent experience for how Java was being marketed back then, because no language since then has been backed by such a marketing budget, but Java was being crammed down our throats whether we liked it or not. Not entirely unlike AI is today, only programmers were being even more targeted and may have seen more inflation-adjusted dollars-per-person spend, since the set of people being targeted is so much smaller than AI's "everyone in the world" target.
This cramming did not have any regard for whether Java was a good solution for a given problem, or indeed whether the Java of that era could solve the problem at all. It did not matter. Java was Good. Good was Java. Java was the Future. Java was the Entire Future. Get on board or get left behind. It was made all the more infuriating by the fact that the Java of this time period was not very good at all; terrible startup, terrible performance, absolutely shitty support for anything we take for granted nowadays like GUIs or basic data structure libraries, garbage APIs shoved out the door as quickly as possible so they could check the bullet point that "yes, java did that" as quickly as possible, like Java's copy-of-a-copy of the C++ streams (which are themselves widely considered a terrible idea and an antipattern today!).
I'm not even saying this because I'm emotional or angry about it or hate Java today. Java today is only syntactically similar to Java in the 90s. It hardly resembles it in any other way. Despite the emotional tone of some of what I'm saying, I mean this as descriptive. Things really were getting shoveled out the door with a minimum of design and no real-world testing, so that it could be said of the Java they were spending so much marketing money on that yes! It connected to this database! Yes! It speaks XML! Yes! It has a cross-platform GUI! These things all barely work as long as you don't subject them to a stiff breeze, but the bullet point is checked!
The original plan was for Java to simply be the browser language, because that's what the suits wanted, because probably that's what the suits were being paid to want. Anyone can look around today and see that that is not a great match for a browser language, and a scripting language was a better idea especially for the browser in the beginning. However, the suits did not care.
The engineers did, and they were able to sneak a scripting language into the browser by virtue of putting "Java" in the name, which was enough to fool the suits. If my previous emotional text still has not impressed upon you the nature of this time, consider what this indicates from a post-modern analysis perspective. Look at Java. Look at Javascript. Observe their differences. Observe how one strains to even draw any similarities between them beyond the basics you get from being a computer language. Yet simply slapping the word "Java" on the language was enough to get the suits to not ask any more questions until much, much later. That's how crazy the Java push was at the time... you could slip an entirely different scripting language in under the cover of the incredible propaganda for Java.
So while the press release will say that it was intended to glue Java applets, because that's what the suits needed to hear at that point, it really wasn't the case and frankly it was never even all that great at it. Turns out bridging the world between Java and Javascript is actually pretty difficult; in 2025 we pay the requisite memory and CPU costs without so much as blinking but in an era of 32 or 64 MEGAbyte RAM profiles it was nowhere near as casual. The reality is that what Javascript was intended to be by the actual people who created it and essentially snuck it in under the noses of the suits is exactly what it is today: The browser scripting language. I think you also had some problems like we still have today with WASM trying to move larger things back and forth between the environments, only much, much more so.
We all wish it had more than a week to cook before being shoved out the door itself, but it was still immensely more successful than Java ever could have been.
(Finally, despite my repeated use of the term "suits", I'm not a radical anti-business hippie hacker type. I understand where my paycheck comes from. I'm not intrinsically against "business people". I use the term pejoratively even so. The dotcom era was full of bullshit and they earned that pejorative fair and square.)