What are the most upvoted users of Hacker News commenting on? Powered by the /leaders top 50 and updated every thirty minutes. Made by @jamespotterdev.
Can you still install F-Droid?
Can you still run without a Google account?
Please don't. We'll spend $500,000 tracking down what happened.
In the past I've felt like some of the anti-Altman rhetoric on HN was overkill. In some cases it felt like piling on, and while there was definitely some shady stuff in the past, it seemed like folks were too quick to paste the "evil" banner on anything they disagreed with.
I was wrong, and I no longer think that. I now lump him in with the rest of the narcissistic sociopaths I see with so much power in the country. I'm honestly really curious what past Altman champions like paulg think of him now. I just don't see how this is the slightest bit defensible.
The "We Will Not Be Divided" pledge at https://notdivided.org/ (and discussed at https://news.ycombinator.com/item?id=47188473 ) has 96 OpenAI signatories. Time for these people to show if their signatures actually meant something or were just meaningless theater. It's not like these people would have much trouble getting jobs given they're AI experts with resumes to back it up. Signing that pledge and then staying at OpenAI after this would just look like rank hypocrisy to me.
Yeah, I'd definitely like to be able to edit my context a lot more. And once you consider that you start seeing things in your head like "select this big chunk of context and ask the model to simplify that part", or do things like fix the model trying to ingest too many tokens because it dumped a whole file in that it didn't realize was going to be as large as it was. There's about a half-dozen things like that that are immediately obviously useful.
Yet the humans of the time, a small number of the smartest ones, did it, and on much less training data than we throw at LLMs today.
If LLMs have shown us anything it is that AGI or super-human AI isn't on some line, where you either reach it or don't. It's a much higher dimensional concept. LLMs are still, at their core, language models, the term is no lie. Humans have language models in their brains, too. We even know what happens if they end up disconnected from the rest of the brain because there are some unfortunate people who have experienced that for various reasons. There's a few things that can happen, the most interesting of which is when they emit grammatically-correct sentences with no meaning in them. Like, "My green carpet is eating on the corner."
If we consider LLMs as a hypertrophied language model, they are blatantly, grotesquely superhuman on that dimension. LLMs are way better at not just emitting grammatically-correct content but content with facts in them, related to other facts.
On the other hand, a human language model doesn't require the entire freaking Internet to be poured through it, multiple times (!), in order to start functioning. It works on multiple orders of magnitude less input.
The "is this AGI" argument is going to continue swirling in circles for the foreseeable future because "is this AGI" is not on a line. In some dimensions, current LLMs are astonishingly superhuman. Find me a polyglot who is truly fluent in 20 languages and I'll show you someone who isn't also conversant with PhD-level topics in a dozen fields. And yet at the same time, they are clearly sub-human in that we do hugely more with our input data than they do, and they have certain characteristic holes in their cognition that are stubbornly refusing to go away, and I don't expect they will.
I expect there to be some sort of AI breakthrough at some point that will allow them to both fix some of those cognitive holes, and also, train with vastly less data. No idea what it is, no idea when it will be, but really, is the proposition "LLMs will not be the final manifestation of AI capability for all time" really all that bizarre a claim? I will go out on a limb and say I suspect it's either only one more step the size of "Attention is All You Need", or at most two. It's just hard to know when they'll occur.
You can also get to computable numbers through a similar argument, substituting something Turing-complete for algebra. You definitely do get to learn some interesting things about numbers from computable numbers. The differences between the computables and the full reals are much more subtle than the differences between the rationals and the reals.
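One concrete, hedged way to picture the comment's "computable numbers" claim: a computable real is, operationally, a program that maps any requested precision n to a rational within 10^-n of the true value. The function name `sqrt2_to` below is purely illustrative, not from the comment:

```python
from fractions import Fraction
from math import isqrt

def sqrt2_to(n):
    # A computable real, operationally: given a precision n, return a
    # rational within 10**-n of the true value. The "number" is really
    # the whole procedure, not any single output.
    scale = 10 ** n
    # isqrt gives floor(sqrt(2) * scale) exactly, using integer math only.
    return Fraction(isqrt(2 * scale * scale), scale)
```

Since there are only countably many such programs but uncountably many reals, almost all reals are not computable, which is where the subtle differences between the computables and the full reals come in.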
I got a gift of a box of 3.5" floppies 10+ years ago. Dug them up recently and gave one each to my daughter and her neighbor friends: “Here is the SAVE icon. Keep it with you.”
I also remembered and completed the meme with a magnet stuck to the fridge.
Interesting, I'm running Sequoia and have never seen that.
However, I'm running the Sequoia developer beta. In my system settings under Beta updates, I have "Sequoia developer beta" selected.
At this point it's basically just getting the Sequoia security patches a few days early. But I guess it also suppresses this message?
If you're worried about privacy and security, why did you choose Inngest, which sends all your private data to Inngest? If you want truly private durable execution, you should check out DBOS.
If you want a true lesson on design, check out Ask Tog, starting here:
https://asktog.com/atc/principles-of-interaction-design/
Tog was the original design engineer for the Mac, and arguably one of the first true HCI engineers.
Then read the rest of his website. He goes into where Windows tried to copy Mac and got it horribly wrong.
One of my favorite examples is menu placement. The reason the Mac menus are at the top is because the edges of the screen provide an infinite click target in one direction. So you just go to the top to find what you want. With Windows, the menu was at the top of each Window, making a tiny click target. Then when you maximized the window, the menu was at the top, but with a few pixels of unclickable border. So it looked like the Mac but was infinitely worse.
If you're making a UI, you should read all of Tog's writings.
What is your plan for long term support?
I think Steve was correct in that Windows 95/98/NT/ME/2000 was functional but it wasn't particularly elegant. But the part I think Steve missed was that elegance may get the "ohhs and ahhs" but functionality gets the customers. Back when NeXT was a thing a friend of mine who worked there and I (working at Sun) were having the Workstation UX argument^h^h^h^h^h^h^h^hdiscussion. At the time, one component was how there were always like 4 or 5 ways to do the same thing on Windows, and that was alleged to be "confusing and a waste of resources." And the counter argument was that different people would find the ways that work best for them, and having a combinatorial way of doing things meant that there was probably a way that worked for more people.
The difference for me was "taste" was the goal, look good or get things done. For me getting things done won every time.
> Korea, Vietnam, Iraq (I and II), Afghanistan, etc. were not technically wars in the sense that there was any form of formal declaration by congress.
(1) A declaration of war is not necessary for a war to legally exist, except in the context of specific US laws that might rely on a declared state of war,
(2) Congress's constitutional power to declare war is not dependent on the use of special words; every (conditional or unconditional) “authorization for the use of military force” (including the broad but time limited authorization in the War Powers Act) and similar is an application of the Constitutional power to declare war.
> except for all of the laws that allow you to do these things.
It's even worse than that, because this administration has made it clear they will push as hard as possible to have the law mean whatever they say it means. The quoted agreement literally says "...in any case where law, regulation, or Department policy requires human control" - "Department policy" is obviously whatever Trump says it is ("unitary executive theory" and all that), and there are numerous cases where they have taken existing law and are stretching it to mean whatever they want. And when it comes to AI, any after-the-fact legal challenges are pretty moot when someone has already been killed or, you know, the planet gets destroyed because the AI system decides to go WarGames on us.
The same goes for anybody still working at OpenAI past Monday morning 9 am.
>so long as it stays within the rules of the game, which is to say, engages in open and free competition without deception or fraud
LOL
Anthropic isn't preventing them from managing their key technologies. If my software license says 1000 users, and I build into the software that you can only connect with 1000 users, is your argument that the government can no longer manage their tech?
That my software should allow license violations if the government thinks it is necessary?
Nobody has to worry about atrophy. That's the good thing about it: Things only atrophy when you don't need them any more.
He did not learn to find — on the keyboard and tapped out — instead, bravo!
Sam Altman "donated" a bunch of money to US government officials.
US government officials said "we're thinking of ruining Anthropic if they don't play ball".
Sam Altman publicly said "oh no, don't do that, that's terrible, they're right to not play ball".
Sam Altman signed a deal to play ball after he said that, and it turned out he had been working on this deal even before the US government officials said the thing about ruining Anthropic.
Any user interface designer should take a good look at the controls on a commercial airliner. An awful lot of effort goes into making an intuitive, effective user interface. I have disagreements with it, but there's no denying it's very well done.
Designing a programming language is mostly about usability. I'll be giving a talk about that in April at Yale. It's a fun topic!
Once the windows actually become circles, or maybe at some point along that path, they'll go back to square corners and congratulate themselves on how much better and innovative they are. It's just a stupid trend to keep rounding things more and more... I hope.
Too much willingness to disobey unlawful orders from the "woke left" I assume
I can never see the point, though. Performance isn't anywhere near Opus, and even that gets confused following instructions or making tool calls in demanding scenarios. Open weights models are just light years behind.
I really, really want open weights models to be great, but I've been disappointed with them. I don't even run them locally, I try them from providers, but they're never as good as even the current Sonnet.
Easy to celebrate from a few thousand miles away.
I'm not saying the Ayatollah wasn't a vile criminal, but it's always innocents on the ground who face the brunt of war.
I hope the citizens of Iran can have a peaceful transition and chart a better path for their country, but every single one of America's previous forced regime changes in the region (and across the world) has shown otherwise.
> One thing became abundantly clear: most people in the world don’t and have never lived like Europeans.
Looking at marriage norms across the world actually suggests the opposite takeaway. What’s remarkable is how similar marriage norms are among people who had almost no historical contact with each other. Confucian marriage 2,000 years ago wasn’t that different from Christian marriage 100 years ago, despite those two cultures having almost nothing else in common.
When anthropologists identify societies with significantly different marriage norms, it’s always some random tribal society that never grew beyond a relatively small number of people and never developed civilization to speak of.
> MinIO as an S3-compatible object store is already feature-complete. It’s finished software.
I don't see how these two lines can be written together.
The goal is either to remain S3-compatible or to freeze the current interface of the service forever.
As it stands this fork's compatibility with S3, and with the official MinIO itself, will break as soon as one of them pushes an API update. Which works fine for existing users, maybe, but over time as the projects drift further apart no new ones will be able to onboard.
This war has killed a lot of innocent people. Khamenei was not one of the innocent.
Then you can just skip this submission and nobody will be the wiser.
Is the posting a description of a real system, or just imagination? Is there a link to something that makes this real?
A "simple word choice"?? This isn't just about the single word "impose", read the whole post:
> Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment. The emphasized language is the delta between what OpenAI agreed and what Anthropic wanted.
> OpenAI acceded to demands that the US Government can do whatever it wants that is legal. Anthropic wanted to impose its own morals into the use of its products.
So first off, regarding that first paragraph, didn't any of these idiots watch WarGames, or heck, Terminator? This is not just "oh, why are you quoting Hollywood hyperbole" - a hallmark of today's AI is we can't really control it except for some "pretty please we really really mean it be nice" in the system prompt, and even experts in the field have shown how that can fail miserably: https://www.tomshardware.com/tech-industry/artificial-intell...
Second, yes, I am relieved Anthropic wanted to "impose" their morals because, if anything, the current administration has been loud and clear that the law basically means whatever they say it does and will absolutely push it to absurd limits, so I now value "legal limits" as absolutely meaningless - what is needed are hard, non-bullshit statements about red lines, and Anthropic stood by those, and Altman showed what a weasel he is and acceded to their demands.
You can either leave or stay. If you want to leave, start interviewing. It sounds like you want to stay, so optimize for that. Reduce unnecessary spend, increase savings, and ride it until it ends, either for them or you. Keep the job until you cannot. You can then look for another gig. You can only control what you control, don’t worry about that when you cannot. Optimize within what you can control.
It seems value-neutral to me. It's descriptive. Particularly for anyone who understands that different groups of people will legitimately disagree on many moral questions.
> I don't see why I should quit.
So, can you please draw the line when you will quit?
- If the OpenAI deal allows domestic mass surveillance
- If OpenAI allows the development of autonomous weapons
- If OpenAI no longer asks for the same terms for other AI companies
Correct?
If so, then if I take your words at face value:
- By your reading non-domestic mass surveillance is fine
- The development of AI based weapons is fine as long as there is one human element in there, even if it could be disabled and then the weapon would work without humans involved
- The day that OpenAI asks for the same terms for other AI companies and if those terms are not granted then that's also fine, because after all, they did ask.
I have become extremely skeptical when seeing people whose livelihood depends on a particular legal entity come out with precise wording around what does and does not constitute their red line, but I find it fascinating nonetheless, so if you could humor me and clarify I'd be most obliged.
>I look back each time and I think "man, I was doing that thing when I could have been doing it so much better?". And I feel so hopeful for the future.
The future appears now to be: "Young kids won't have this sense of wonder, or control of the machine, anymore. And a whole lot fewer will now have a career in IT either".
We have unaligned AIs now. They're called corporations.
“There is one and only one social responsibility of business—to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game, which is to say, engages in open and free competition without deception or fraud.” - Milton Friedman, 1970.[1] That article, in the New York Times, established "greed is good, greed works" as a legitimate business principle.
Most of the problems people are worried about with AIs are already real problems with corporations.
[1] https://www.nytimes.com/1970/09/13/archives/a-friedman-doctr...
Don't overlook the media consolidation under Bari Weiss.
In this context they're not the whores, they're the johns. Trump / the PAC would be the whores, but what else is new?
>Nothing persuaded me that he had anything interesting to add - neither rationally, nor aesthetically - about a topic which has been covered by philosophers throughout the millennia.
That sounds more like an emotionally charged reaction than a calm assessment of the book on its own merits.
Especially when the idea here is that he presents his idiosyncratic vision of the concept of “truth” - not some claim that he solves the problem of truth "which has been covered by philosophers throughout the millennia", and which could very well be inherently unsolvable anyway.
A writer (even more so, an artist with a unique viewpoint) can add lots of very interesting observations and new ways of seeing the concept of truth or our approaches to it, even when they do it "in the small", without taking on or pretending to tackle the philosophical / ontological core issue.
It's even more useful if an author says some things that rub you the wrong way, or challenge your core tenets. Else, I guess one can always just resort to some echo-bubble-friendly comfort reading.
It means writing code (doing) vs. writing documentation, plans, project architecture documents, and so on.
It’s DoD. It’s still not officially changed. But if you insist on using the nickname it should be DoW.
I consider any claims like this fundamentally unreliable because there's too much propaganda value in lying, especially during the opening phases of a war. I also don't consider Khamenei that significant; he's an important theocratic figure obviously but doesn't have the same kind of weight or charisma that his predecessor had.
> I don't think a "not good programmer" can write a Lisp dialect.
You can write a lisp in 145 lines of Python: https://norvig.com/lispy.html
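For a flavor of why this is so approachable, here is a minimal sketch in the spirit of Norvig's lispy (the names and the tiny four-operator environment are illustrative, not Norvig's actual code):

```python
import math
import operator as op

def tokenize(src):
    # "(+ 1 2)" -> ["(", "+", "1", "2", ")"]
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    # Consume the token stream, building nested Python lists as the AST.
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # drop the closing ")"
        return expr
    try:
        return int(tok)
    except ValueError:
        return tok  # symbols stay as strings

# A toy global environment; the real lispy has much more.
ENV = {"+": op.add, "-": op.sub, "*": op.mul, "/": op.truediv, "pi": math.pi}

def evaluate(x, env=ENV):
    if isinstance(x, str):       # symbol: look it up
        return env[x]
    if not isinstance(x, list):  # number literal
        return x
    proc = evaluate(x[0], env)   # (f a b ...): head is the procedure
    args = [evaluate(arg, env) for arg in x[1:]]
    return proc(*args)
```

With that, `evaluate(parse(tokenize("(* (+ 1 2) 4)")))` gives 12; the rest of lispy's line count goes to special forms like `define`, `if`, and `lambda`.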
That is precisely the key and you can already see many examples of this in the last 12 months. One group after another is stripped of their rights and mistreated and yet nobody actually does anything other than some protests. I wonder how this sort of thing would go down in France or Germany, for Germany of course the track record is sub-optimal but I would hope that they had at least learned their lessons well enough to avoid a repetition of the blackest chapters in our history.
What puzzles me is how for many years it was predicted that this was going to happen, and yet, in spite of the warnings, it still did. I just don't get it.
> Six months later, an architectural change required modifying those features. No one on the team could explain why certain components existed or how they interacted. The engineer who built them stared at her own code like a stranger’s.
Genuine question: so what?
First of all, team members leave all the time, and you're stuck staring at code nobody instantly understands.
Second of all, LLMs are a godsend in helping you understand how existing code works. Just give one the files and ask it to explain to you what the components do and how they interact. It'll give you a high-level summary and then you can interactively dig in, far faster than has ever been possible before.
Heck, I often don't remember anything about code I wrote six months ago. It might as well have been written by someone else. And that's not an original observation either -- I remember hearing the same thing from other developers decades ago, as justification for writing better code comments.
Modern codebases are often far too large for any one person or even an entire team to fully comprehend at once. The team has cycled through generations of team members, with nobody who can remember the original rationales for design decisions.
LLMs are helping comprehension more than ever. I don't understand why people aren't talking about this more.
What's missing here is the complex network of alliances that led to WWI. The Iranian regime has alienated virtually everyone, including many of its Muslim neighbors. Nor is the regime part of some overarching international movement, like the communist countries were. Who is going to lift a finger to help Iran?
I'm not supportive of these strikes. Iranians created this government, and if they want to topple it they'll have to be the ones to do it, without foreign intervention.
I assume it is mostly or entirely written by AI, so that tracks.
Fair enough. I have a bit of a trigger finger reaction to anything that hints at suggesting that regular people shouldn't be trusted to use this stuff.
Can that account be upgraded to Workspace just to get the support?
There isn't enough work to justify 10,000 employees. There are diminishing returns.
> If openly bribing a crony gov to cancel your competitor is now the de-facto standard of making business in the US
It very clearly is, the present AI instance is far from the only recent case.
> I don't see how any rational investor could still see US companies as a secure investment.
They evaluate the propensity and ability to profitably engage in open corruption the same as they evaluate other capacities of the company. “Secure” isn't a binary category, and the risk here is much like any other risk.
> When the rule of law degrades into pay-to-play politics, the inevitable result is a mass exodus of both capital and top-tier talent.
That is the expected result of increasing perceived risk. Yes, probably one of those “slowly and then all at once” things.
> If they're designated a "supply chain risk", then any company that does any business with the military cannot be a customer.
Wrong.
Companies with military contracts cannot rely on Anthropic-supplied products and services for those contracts. (Yes, the cabinet member who misrepresents his own title and name of his department also publicly misrepresented the legal consequences of the designation. It's almost like ignoring the law and just making things up is a pattern with him and his boss.)
> Are you a tankie or socialist?
FFS man.
As for stalking your account: if you don't want your comment history to be visible then don't participate on HN.
That’s the notetaking app that has several "editors", isn’t it?
So that if you want to use feature A you need a different view inside the app than if you want to use feature B. And if you use both, you constantly switch?
>I don't want to insult you but your president is a populist and a TV personality. He is not a policy maker, he is more like an actor.
All of them are, even those that haven't had a show on TV.
Why would a government be interested in "privacy preserving"? Their goal is the exact opposite.
Rather it's business as usual.
> it literally is the AIPAC
AIPAC isn't a person. Who is the person who convinced the President to order these strikes? It could be someone at AIPAC. There is no evidence for that, I suspect, because it's highly unlikely.
The rumor I heard was that high-level Pentagon generals had subtly suggested that Trump target Iran. The reason was to distract his attention from Greenland. Logic goes that if you have a reality TV star who built his brand on being a tough guy in the White House, it's far better that he attack a theocratic dictatorship that funds a host of terrorist organizations and whose country is already on the verge of collapse than a NATO ally and fellow democracy that didn't do anything to us.
Yup. There are good reasons why it's a problem in financial markets but NOT usually a problem in prediction markets:
https://www.economist.com/leaders/2026/02/18/why-insider-tra...
> In prediction markets, informed trading is not a crime or an injustice—it is a valuable service.
A big exception, however, is using prediction markets to make predictions on events regarding publicly traded companies.
In typical jury trials, the jury is instructed that any terms not defined in the relevant statutes are to have their common-sense, ordinary meanings as understood by the jury. The jury is usually also selected to be full of reasonable, moderate people, and folks who are overly pedantic usually get excused during voir dire.
Do you really think a pool of 12 people off the street is going to consider an embedded system, wi-fi router, or traffic light as an "operating system" under this law? Particularly since they don't even have accounts or users as a common-sense member of the public would understand them?
I can generally re-find my place in books, but years ago I acquired a stack of orange punch cards from a university library that they were giving away as scrap paper. These make great bookmarks and also interesting historical conversation pieces if someone notices/recognizes them.
I think the previous use for the punch cards was to have one for each book and scan them on checkout/checkin (maybe this predated barcodes?).
"He that seeks peace, speak of war" — Walter Benjamin, on the wall next to the entrance of https://en.wikipedia.org/wiki/German_Tank_Museum
> They haven’t had an election since the war started
Because that’s what their constitution says. https://www.wilsoncenter.org/blog-post/ukraines-presidential...
> routinely force unwilling conscripts into vans
Can you clarify what you understand conscription to be?
This is interesting. I took the opposite path. I used to remember page numbers while reading books, just so I could come back without getting lost. That habit started in the School library. These days, I bought a simple, cheap, paintable bookmark, about 100 of them, which my daughters can paint. I have them lying around in books, as I tend to read multiple books at a time. My daughters keep painting them with whatever they want, from anime to their favorite characters, to just about anything. So, bookmarks for me everywhere. Sometimes, I tend to go back a few pages just to recollect the books I was reading a while ago.
Thank you. Can't believe I had to scroll this far down for context.
I recently stumbled upon this delightfully titled book from 1982, "Application development without programmers": https://archive.org/details/applicationdevel00mart
Which includes this excellent line:
> Unfortunately, the winds of change are sometimes irreversible. The continuing drop in cost of computers has now passed the point at which computers have become cheaper than people. The number of programmers available per computer is shrinking so fast that most computers in the future will have to work at least in part without programmers.
One thing has always been constant throughout, though: It's always about the stock market.
My first agent test was pointing it at my toy compiler repo and asking it to translate the AT&T Assembly files that gave me so much trouble to come up with (my head has Intel syntax burned into it) back to Intel syntax.
In a couple of seconds I had it back.
Didn't bother committing the changes, because it works and was a toy compiler anyway.
Unsloth have just released benchmarks on how their dynamic quants perform for Qwen 3.5
Zig would have been an interesting contender back in the 1990s, between Object Pascal and Modula-2; nowadays we know better.
For me, while Go is definitely better than Oberon(-2) and Oberon-07, some of its design decisions are kind of meh. Still, I will advocate for it in certain contexts; see the TinyGo and TamaGo efforts.
As an old ML fanboy, I'd say you can find such tendencies in plenty of languages, not only OCaml. :)
I see Rust as a great way to have made affine types more mainstream; however, I'd rather see the mix of automatic resource management + strong type systems as the better way forward.
Which is even being acknowledged by Rust's steering group, see Roadmap 2026 proposals.
No mention of Walter Wriston and First National City Bank (later Citicorp)? Wriston is sometimes credited with the concept of networked ATMs, in the sense that he as an executive pushed the project forward.[1] He scaled up the technology, flooding New York City with ATMs. Then everybody else in banking had to install them.
[1] https://www.nytimes.com/2005/01/21/obituaries/walter-b-wrist...
I did some off-road travelling in Croatia about 15 years ago, thanks to the GPS driving us onto some farming roads.
Only when I got out of it did I realise how stupid an idea it was to keep following the GPS; in some countryside villages the markings of the war were still visible, with abandoned buildings full of bullet holes.
Naturally, having mines still around was a possibility that I completely forgot about.
> It defines operating system in the law.
No, it doesn't.
It defines the following terms: "account holder", "age bracket data", "application", "child", "covered application store", "developer", "operating system provider", "signal", and "user".
> This wouldn’t apply to embedded systems and WiFi routers and traffic lights and all those things. It applies to operating systems that work with associated app stores on general purpose computers or mobile phones or game consoles.
Presumably, this is based on reading the language in the definition of "operating system developer", and then for some reason adding in "game consoles" (the actual language in both of those includes “a computer, mobile device, or any other general purpose computing device”).
(I've also rarely seen such a poorly-crafted set of definitions; the definitions in the law are in several places logically inconsistent with the provisions in which they are applied, and in other places circular on their own or by way of mutual reference to other terms defined in the law, such that you cannot actually identify what the definitions include without first starting with knowledge of what they include.)
But the Dow is over 50,000!
That is, the money doesn't care so long as it's still profitable. When the recession comes a Democrat will be allowed back in to fix things.
See Liz Truss.
There's no land campaign. It's an isolated series of strikes for PR reasons and wishful thinking about Iran collapse.
“In the future” is not “now”.
Neither the current administration nor Israel are particularly popular with the US public today, and those are correlated in that Israel has particularly lost support from Democrats and Independents in the US, suggesting that a change in power (legislative or executive, and especially both) in the US government could very easily spell much less favorable US policy toward Israel.
Getting publicly kicked to the curb by the Trump admin mere hours before it starts another war is probably the best thing that could have happened to Anthropic. Not sure how well OpenAI's parachuting in is gonna look with hindsight. I have a feeling we won't have to wait that long to find out.
Ban all AI use from government, so maybe we can actually have real humans in charge again.
Because it's fun. Life is meant to be enjoyed.
Those who worry about an imaginary risk and live their lives in constant fear have turned into nothing more than machines enslaved by propaganda.
France still has WWI unexploded ordnance, and keep-out areas are still being de-mined. This has been going on for a century now. About 900 tons of explosives are removed each year. Completion in 700 years at the current rate.[1]
[1] https://www.warhistoryonline.com/world-war-i/the-red-zone-la...
Related post: https://nesbitt.io/2026/02/27/xkcd-2347.html
1-800-Come-on-now
DoW: WOKE Anthropic tried to impose their 'values' on us? Friendship ended!! National security risk!
OpenAI: We just signed a deal that's strong on values, the exact same ones as Anthropic, no way we would mislead anyone about this
You: Seems legit
You really think someone would do that, just go on the internet and tell lies?
https://knowyourmeme.com/memes/just-go-on-the-internet-and-t...
This seems squarely within the purpose of the Defense Production Act: https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950
"Title I authorizes the President to identify specific goods as 'critical and strategic' and to require private businesses to accept and prioritize contracts for these materials."
If you invented a new kind of power source, and the government determined that it could be used to efficiently kill enemies, the government could force you to provide the product to them under the DPA. Why should AI companies get an exemption to that?
> They can also classify it as restricted data -- like nuclear weapons technology.
Nuclear weapons technology is restricted under very specific legislative authority; where is the corresponding authority that could be selectively applied to a particular vendor's AI models or services?
There's an audio clip in the article. Made me laugh out loud.
“If a system is maintained over an extended period and has observed behavioral traits that are consistent within that period, that is, in itself, strong evidence that those behavioral traits are consistent with the purpose for which the system is permitted to exist” is kind of a mouthful, though, and there is value in succinctness.
(Although there is another message, there, too: “the purpose of a system, insofar as it can be said to exist separate from what it actually does, has no weight in justifying the system’s existence or design”.)
I'd admire them if they took a principled or moral stance on AI. As it stands, they're saying "we don't want fully autonomous weapons because they might kill too many Americans by accident while trying to kill non-Americans" and "we don't want AI to surveil Americans, but anyone else, sure".
> FISA warrants were even more incredible, with well below 1% rejection rates.
That's potentially much less incredible, and in any case not directly comparable, because it's the final, not on-first-submission, rate, and also doesn't count applications withdrawn after a preliminary rejection that allows modifications but before a final ruling. It only counts the share of those that get a final ruling where that ruling is an approval.
And she'll continue to think the way she did before. But at least she shuts up so you've achieved that much.