What are the most upvoted users of Hacker News commenting on? Powered by the /leaders top 50 and updated every thirty minutes. Made by @jamespotterdev.
"The air taxi can continue flying with up to two motors out" says the article.
Probably safer than a V-22 Osprey.
You could do a whole thesis on how industrialization and the invention of bureaucracy are efforts to get reproducible results out of fallible humans.
We don't yet have the luxury of several thousand years of work trying to get LLMs to be less fallible.
Just curious - what's the point of linking to an older post which doesn't have any comments?
It's not in the top 10, but it's one of the more well-known and widely recommended books in the software industry. I'd put it in the same bucket as "Clean Code" and maybe even "Domain Driven Design"; they're kinda from the same "thought school" in the software industry. So it's definitely over-represented in training data (I'd guess primarily in the form of articles and blog posts and educational material reiterating or rephrasing ideas from the book).
FWIW, I found the concept of "seams" from that book useful back when working on some monolithic legacy C++ code a few years back, as TDD is a little trickier than usual due to peculiarities of the language (and in particular its build model), and there it actually makes sense to know the different kinds of "seams" and what they should vs. shouldn't be used for.
When I was at CERN during the early 2000s, the use of LaTeX was already slowing down. In my ATLAS TDAQ/HLT section, most folks were using one of the required Word templates, or FrameMaker; only a few hardliners were still going with LaTeX.
The only password manager that IT allows on their hardware, bought by your employer.
A side effect of Electron crap: before Zed, many editors and IDEs on Atari, Amiga, Windows, OS/2, BeOS, Mac OS, and NeXTSTEP were written in fully native code.
Ehang had a scaled-up multi-rotor drone that could carry one person. They're a drone company. Worked, but max flight time was something like 17 minutes. Their new model has both lift props and wings, plus a pusher prop for horizontal thrust. Range about 200km.
Joby is more like an Osprey. It takes off and lands hanging from its props, then tilts the props horizontally to operate in airplane mode. This potentially offers more range with less power consumption. They've tried running on hydrogen, and claimed 524 miles of range.
There's also Archer Aviation (https://www.archer.com/) which has a roughly similar vehicle. Test flights since 2021. Was supposed to be in service in 2025. Didn't happen. They supposedly have an air taxi contract for the 2028 Olympics in LA. Owned, or at least heavily financed, by Stellantis.
There seems to be convergence on something that transitions to airplane mode, as opposed to the previous round of giant quadrotor-type drones.
It's now clear that this can be done, but not clear that there's a business in it.
Unless they still have an unexpired patent on the design, it's completely legal to clone. Physical objects simply do not have the same type of copyright protection, and there is considerable precedent in making compatible components --- the most notable example being the automotive aftermarket.
> Why use someone's project when you can just have the robot write your own?
I've been thinking about this a bunch recently, and I've realized that the thing I value most in software now isn't robust tests or thorough documentation - an LLM can spit those out in a few minutes. It's usage. I want to use software which other people have used before me. I want them to have encountered the bugs and sharp edges and sanded them down.
Full book content and model generations are not included because the books are copyrighted and the generations contain large portions of verbatim text.
There are plenty of old books in the public domain already... but I'm not sure what exactly this exercise is supposed to show, since the Kolmogorov limit still stands in the way of "infinite compression".
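The counting argument behind that limit is easy to check (a quick Python sketch; the pigeonhole principle, not anything specific to this benchmark):

```python
# There are 2**n bitstrings of length n, but only 2**n - 1 bitstrings
# of any length strictly less than n, so no injective (i.e. lossless)
# encoder can shorten every input -- "infinite compression" is off
# the table no matter how clever the model is.
def strings_of_length(n):
    return 2 ** n

def strings_shorter_than(n):
    # lengths 0 .. n-1
    return sum(2 ** k for k in range(n))
```
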
I think it's extraordinarily telling that people are capable of being reflexively pessimistic in response to the goblin plague. It's like something Zitron would do.
This story is wonderful.
Capex becomes opex if the enemy is shooting your drones down or if you're using disposable drones to deliver fatal payloads.
Volcanic eruption, most likely.
This is a site for intellectual curiosity, not pedantic dismissal.
> Nit. Isn't it a real honeypot, not a fake one?
The lack of even taking your payment details makes it look either fake, as in still being built or built as a demo, or not being a serious operation.
> It’s about 14 miles to go from jfk to manhattan. A train could do this in 20 minutes or so
I used to live on 30th & Madison. Blade was about 30 minutes door to door. LIRR was 50 to 55 minutes. Car 45 to 120 minutes. Helipads are cheaper to build and site than train stations; for most people, eVTOL will almost always be faster than the train. (I mostly take the train.)
> Instead of supporting people we solve problems for the 0.001% who will give us a quick buck
Blade cost $200 a trip. Assuming that's only affordable for someone making $50k a year or more, that covers the top 80% of Manhattan, 30% of New York City and America and about 5% of the world.
I'm not arguing we don't need better rail (and ferry) connectivity between our airports and urban cores. But you're always going to have a need for time-efficient travel options. And eVTOL has significant applications outside luxury transport. This complaint lands like someone complaining that the original Tesla Roadster was "inefficient and painful" as it was only affordable to the rich.
The same logic applies to comments. No comments are better than wrong comments.
In addition to the middlebox problem, most (not all, but most) of the things that send information over the Internet that aren't HTTP-shaped (incl. HTTP/2 and HTTP/3) are worse than the best HTTP-shaped things. This makes sense: HTTP-shaped things are where all the energy is directed.
> The article doesn’t really tell us what is gained by rejecting infinity.
Decidability. The issues around undecidability all involve the lack of an upper bound. In a finite deterministic space, everything is decidable, although some things may be too costly computationally to decide.
There are several ways to go for decidability. The brute force way is computer arithmetic - there is no number larger than 2^64-1. That's how we get things done on computers, but proofs about numbers with finite upper bounds need lots of special cases. Mathematicians hate that.
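That "no number larger than 2^64-1" world is easy to illustrate (a minimal Python sketch of machine-style modular arithmetic):

```python
MASK = 2**64 - 1   # largest value representable in 64 bits

def add_u64(a, b):
    """Addition the way the hardware does it: modulo 2^64.
    The wraparound is exactly the kind of special case that
    proofs over bounded integers have to handle everywhere."""
    return (a + b) & MASK
```
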
I used to work on this sort of thing, using Boyer-Moore theory. That's a lot like the Peano axioms. There is (ZERO), and (ADD1 (ZERO)), and (ADD1 (ADD1 (ZERO))), etc. Everything is constructive and has an unambiguous representation in a LISP-like form. You can have recursive functions. But they must be proven to terminate, by having a nonnegative value which decreases on each recursive call. There is a distinction between "infinite" and "arbitrarily large". You can talk about arbitrarily large numbers, but you cannot get to 1/2 + 1/4 + 1/8 ... = 1. You can have integers and rational numbers of arbitrary size, but not reals.
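A rough sketch of that constructive style, with Python tuples standing in for the LISP-like forms (illustrative only, not the actual prover's syntax):

```python
# Peano-style naturals: ZERO and ADD1 are the only constructors,
# so every value has one unambiguous representation.
ZERO = ("ZERO",)

def add1(n):
    return ("ADD1", n)

def plus(a, b):
    # Recursion on the first argument, which strictly shrinks on
    # each call -- the termination condition the prover demands.
    if a == ZERO:
        return b
    return add1(plus(a[1], b))

def to_int(n):
    # Convenience: count the ADD1 wrappers.
    i = 0
    while n != ZERO:
        n, i = n[1], i + 1
    return i
```
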
Set theory was interesting. Rather than axiomatic set theory, I was using lists as sets, with the constraints that no value could be duplicated and the list must be ordered. Equality is strict - two things are equal only if the elements are all equal, compared element by element. It's possible to prove the usual axioms of set theory via this route. The ordered criterion requires proving things about ordered list insertion to get there. It's ugly and needs machine proofs.
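The ordered-list-as-set idea looks roughly like this (an illustrative Python sketch, not the original Boyer-Moore formulation):

```python
def set_insert(x, s):
    """Insert x into an ordered, duplicate-free list, preserving both
    invariants. Because every set then has exactly one canonical
    representation, set equality is plain structural equality."""
    if not s or x < s[0]:
        return [x] + s
    if x == s[0]:
        return s            # no duplicates allowed
    return [s[0]] + set_insert(x, s[1:])

def make_set(xs):
    s = []
    for x in xs:
        s = set_insert(x, s)
    return s
```
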
I was doing this back in the early 1980s, when machine proofs were frowned upon. Mathematicians were still upset about the four-color theorem proof. It's all case analysis, with thousands of cases. That's more acceptable today.
Looked at in this light, infinity is a labor-saving device to eliminate special cases, at a potential cost in soundness.
Odd time for Claude to go down since it's not peak work hours.
Unfortunately another comment thread here says that it doesn't.
Device reputation on HN would be a pretty funny thing for them to attempt.
Very few of the pre-LLM-era applications, even restricting the set down to the ones in common actual business use, were truly beautiful or unique. There was an era in which most applications were really just MS Access databases; another, long era in which they were literally Excel spreadsheets.
It's a much bigger problem on things like Amazon. My expectation is that Amazon would come under the provisions of this law if the buyer was in Maryland. One of the most annoying things about Amazon is seeing different prices for the same product depending on whether you're using a browser with no history and a VPN putting you in a different zip code, or your everyday browser, where they can see where you're coming from and know who you are.
>Aren't you forgetting the part that says "solely: (a) to perform its obligations set forth in the Terms, including its Support obligations as applicable; (b) to derive and generate Telemetry (see Section 4.4); and (c) as necessary to comply with applicable Laws
I don't like any of the above, and (a) is so vague as to be useless, even if you read through the obligations.
>Except as required by applicable Laws, Zed will not provide Customer Data to any person or entity other than Customer’s designees (including pursuant to Section 7) or service providers."
Companies still do it all the time despite "applicable laws". And when the company is sold, all bets are off.
I'd rather they don't get, or keep, any to begin with.
Because they're elegant. Haskell is a conceptual and syntax mess.
From less than a day ago -
Germany Overtakes US in Ammunition Production Capacity
141 points, 163 comments
I've seen this before in London too in some venues. They have full-on computers that scan your passport and take your photo, for the express purpose of storing this info.
> they have to keep the aircraft carriers hundreds of miles off shore
Drones + cheap antidrones + aircraft carriers + stealth aircraft looks like a solid high/low optimum. Anyone pitching an only-high or only-low strategy is leaving chips on the table.
Yes, exactly. A refund is giving back the money they took from him, compensation is something to make up for the aggravation.
Right. There are plenty of cheap plastic stethoscopes on Alibaba. There are even metal ones in the $2 range. If you want to bang out simple parts in quantity, 3D printing is not the way to go.
I would love to have a Japan-style universal lunch program. But this point is an empty appeal to emotion. Kids are being fed. The U.S. spends $100 billion a year on SNAP and $18 billion a year on the National School Lunch Program. We just focus most of the money on cash benefits to parents of children rather than feeding kids at school.
If the business has a physical presence somewhere, it's not hard. In California, you can get an order to the Sheriff for a "till tap" or an "8 hour keeper". A till tap means a sheriff's deputy or two show up and take the money out of the cash register. A "keeper" means they stand next to the cashier all day and take in money as customers pay. There are fees for this, a few hundred dollars, and they're added to the judgement, so the creditor doesn't end up paying.
The keeper can accept cash and checks, but not credit or debit cards.[1] So, while the keeper is present, the business cannot accept card payments. This disrupts most businesses so badly that they desperately scramble to come up with cash to pay their debt.[2] It gets the message across to management very effectively.
I've done this once. I got paid in full.
[1] https://sfsheriff.com/services/civil-processes/levies/carry-...
[2] https://www.grundonlaw.com/the-power-of-till-taps-debt-colle...
Musk could probably do it for $3 billion.
We spend drastically more money than this on education; it isn't even in the same ballpark. People get tripped up about this because the funding comes from different taxing bodies (most education funding is state and local) --- but all taxation is linked.
We also couldn't fully fund free school meals for this sum, this sum is an ambit claim by the administration not a budget, and a large component of this funding request is for capital expenditures, not ongoing operational expenditure. The (larger) school meal funding dollars would have to be paid regularly.
Have you looked at the results for any commercial query, something like [sofa beds] or [hard drives]? It is basically 100% ads. Anything where the user is intending to spend money, they show only ads, and have all the top producers in the world bid against each other for who gets featured, and Google captures essentially all surplus value in the transaction.
My wife is an investor, and one of her portfolio areas is pharmaceuticals. A couple of portfolio companies have reported that it's becoming basically impossible to make any money off of a new product, because you need to advertise it to reach the customer, and Google will skim all the excess producer surplus off as you compete with other startups serving the same market.
It's basically the perfect business model. They own the path to the consumer, which means they own the economy.
I'd also recently hired someone out of Google Search, and they said that the only queries that "legacy" (non-AI-mode) search cares about are commercial-seeking queries, and the only metric they optimize for is ad conversions on those. It literally is thousands of people whose only job is to get you to click more ads.
FCGI is also an orchestration system. It launches more server tasks when the load goes up, shuts them down when the load decreases, and launches new copies of tasks if they crash. It's like single-system Kubernetes.
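A toy sketch of that kind of manager (threads standing in for worker processes, and the scaling rule invented for illustration):

```python
import queue
import threading

class MiniManager:
    """Toy autoscaling manager in the spirit of FastCGI process
    management: spawn more workers as the backlog grows, up to a cap."""
    def __init__(self, max_workers=4):
        self.tasks = queue.Queue()
        self.results = queue.Queue()
        self.workers = []
        self.max_workers = max_workers

    def _worker(self):
        # Handle "requests" until handed a poison pill (None).
        for task in iter(self.tasks.get, None):
            self.results.put(task * 2)   # stand-in for serving one request

    def submit(self, task):
        self.tasks.put(task)
        # Naive scale-up rule: add a worker whenever the backlog
        # exceeds the current worker count.
        if self.tasks.qsize() > len(self.workers) and len(self.workers) < self.max_workers:
            t = threading.Thread(target=self._worker, daemon=True)
            t.start()
            self.workers.append(t)

    def shutdown(self):
        for _ in self.workers:
            self.tasks.put(None)         # one poison pill per worker
        for t in self.workers:
            t.join()
```

A real FastCGI manager does this with processes, restart-on-crash, and idle reaping, but the spawn-on-demand shape is the same.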
I asked how to get a partial refund (it blew through my quota in a single question) and Claude sent me to Github.
Yes, I hate to be a grammar nazi online but I believe the correct tense is "Ramp's security team indicated that the issue wioll haven be resolved on May 16, 2026." per Dr. Dan Streetmentioner’s Time Traveler’s Handbook of 1001 Tense Formations.
> "People who don't use AI will be left behind", they say. I can't emphasize enough how much I hate it when I hear/read shit like that because I'm pretty sure, in fact, that what will happen is the exact opposite.
> [...] they'll forget how to fucking LEARN. I think that's the part that makes me the saddest. What a beautiful thing it is just to learn stuff.
I love learning. My life of self-education is so much richer with LLMs to help me.
There are dozens of other arguments for not engaging with AI. If your reason is "I love learning" I recommend at least dipping your toes in before you declare that AI is a hindrance, not a help, to people who love to learn new things.
> it was undoubtedly left-wing
What if it's just… right?
I have no idea about this page, but Theori/Xint has a staff of veterans, they are a serious thing.
I'm not a fan of online age verification, but this is completely absurd:
> Every website. Every platform. Every app. Every service. Your children will never know what it was like to think freely online. They will never explore ideas anonymously. They will never question authority without it being logged in their permanent profile. They will never speak freely without fear that every word will be used...
No. Nobody's proposing you need to verify your identity to read articles on the New York Times or Wikipedia or political blogs. And nobody is proposing you need to verify your identity to leave comments on a news article or blog post. And any proposed law around that would run into massive first-amendment constitutional hurdles. It would be struck down easily.
There's always going to be a spectrum of websites that range from open and anonymous (like news and political discussion) to strongly identity-verified (like online banking). I don't like online age verification for particular sites, but at the same time I think it's completely misleading to see it as this slippery slope to a world where anonymous speech no longer exists.
We can have reasoned arguments around how people's usage of sites is tracked and how to prevent that, without making this about free speech and "the hill to die on".
Sadly, this article doesn't explain how this "surveillance pricing" (which is just a scarier-sounding synonym for "dynamic pricing") would even work in a physical grocery store.
Like, prices are displayed on the shelf for everyone to see. And they have to match what you pay at checkout.
So how the heck would a grocery store even do this? And this law is specifically around grocery stores.
Like, there was a big kerfuffle a while ago about how Wendy's was going to engage in dynamic pricing so that a burger would be cheaper during the slow period at e.g. 3-4 pm, compared to the lunch rush. But that wasn't personalized. And the outcry was so strong they never did it, no law needed.
Also, this law excludes loyalty programs and promotional offers, which seems to be the main way that groceries have engaged in dynamic pricing in reality -- the advertised price doesn't change, but they give certain people certain coupons. And of course, my parents were clipping coupons from newspapers decades ago; richer people couldn't be bothered, whereas people trying to make ends meet would clip and save religiously.
> doing a selfie with the webcam
First, that's easily enough to identify you from biometric data, and it's naive to assume it won't be resold. Second, I kept getting asked for ID into my 40s because I looked young. People don't all age in the same way, so this system will fail for people at the tails of a normal distribution - some 15 year olds will easily pass for 25 and vice versa.
As I understand it FastCGI doesn't handle websockets, which is a shame. It should be able to handle SSE though since that's effectively just a regular slow-loading/streaming HTTP response.
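Since SSE really is just a long-lived HTTP response in the text/event-stream format, the wire format is trivial to produce (a sketch of the framing per the WHATWG spec):

```python
def sse_event(data, event=None, event_id=None):
    """Serialize one Server-Sent Events message.
    Each field is a 'name: value' line; a blank line terminates
    the event. Multi-line payloads become repeated 'data:' lines."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    if event_id:
        lines.append(f"id: {event_id}")
    for chunk in data.splitlines() or [""]:
        lines.append(f"data: {chunk}")
    return "\n".join(lines) + "\n\n"
```

Any server that can stream a chunked response can emit these, which is exactly why SSE survives proxies and gateways that websockets can't pass through.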
"Claude" is a big program that wraps a coding agent around a specific model. It would be the specific model that "stands up to you". I post this pedantry only because it may be helpful to you to realize this for other reasons.
This is quite an interesting article for its omissions.
I remember the great FastCGI vs. SCGI vs. HTTP wars: I was founding a Web2.0 startup right at the time these technologies were gaining adoption, and so was responsible for setting up the frontend stack. HTTP won because of simplicity: instead of needing to introduce another protocol into your stack, you can just use HTTP, which you already needed to handle at the gateway. Now all sorts of complex network topologies became trivial: you could introduce multiple levels of reverse proxies if you ran out of capacity; you could have servers that specialized in authentication or session management or SSL termination or DDoS filtering or all the other cross-cutting concerns without them needing to know their position in the request chain; and you could use the same application servers for development, with a direct HTTP connection, as you did in production, where they'd sit behind a reverse proxy that handled SSL and authentication and abuse detection.
It also helped that nginx was lots faster than most FastCGI/SCGI modules of the time, and more robust. I'd initially set up my startup's stack as HTTP -> Lighttpd -> FastCGI -> Django, but it was way slower than just using nginx.
The use of HTTP was basically the web equivalent of the End-to-End Principle [1] for TCP/IP. It's the idea that the network and its protocols should be agnostic to what's being transmitted, and all application logic should be in nodes of the network that filter and redirect packets accordingly. This has been a very powerful principle and shouldn't be discarded lightly.
The observation the article makes is that for security, it's often better to follow the Principle of Least Privilege [2] rather than blindly passing information along. Allowlist your communications to only what you expect, so that you aren't unwittingly contributing to a compromise elsewhere in the network.
And the article is highlighting - not explicitly, but it's there - the tension between these two principles. E2E gives you flexibility, but with flexibility comes the potential for someone to use that flexibility to cause harm. PoLP gives you security, but at the cost of inflexibility, where your system can only do what you designed it to do and cannot easily adapt to new requirements.
[1] https://en.wikipedia.org/wiki/End-to-end_principle
[2] https://en.wikipedia.org/wiki/Principle_of_least_privilege
Hence even on UNIX people moved on from NFS, while on Linux it remains the remote filesystem many reach for.
I don't think subagents are representative of anything particularly interesting on the "agents can run themselves" front.
They're tool calls. Claude Code provides a tool that lets the model say effectively:
run_in_subagent("Figure out where JWTs are created and report back")
The current frontier models are all capable of "prompting themselves" in this way, but it's really just a parlor trick to help avoid burning more tokens in the top context window. It's a really useful parlor trick, but I don't think it tells us anything profound.
It's funny that 128B is now considered Medium. I remember back in the day when 355M parameters was considered medium with GPT-2.
Unfortunately, it's not the Rust stdlib, it's nearly every stdlib, if not every one. I remember being disappointed when Go came out that it didn't base the os module on openat and friends, and that was how many years ago now? I wasn't really surprised, the *at functions aren't what people expect and probably people would have been screaming about "how weird" the file APIs were in this hypothetical Go continually up to this very day... but it's still the right thing to do. Almost every language makes it very hard to do the right thing with the wrong thing so readily available.
I'm hedging on the "almost" only because there are so many languages made by so many developers and if you're building a language in the 2020s it is probably because you've got some sort of strong opinion, so maybe there's one out there that defaults to *at-style file handling in the standard library because some language developer has the strong opinions about this I do. But I don't know of one.
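For what it's worth, Python does at least expose openat-style lookups through the dir_fd parameter, even though it isn't the default path API (a sketch; see os.supports_dir_fd for platform coverage):

```python
import os
import tempfile

def read_relative(dirpath, name):
    """openat-style lookup: resolve `name` relative to a held
    directory fd (like openat(2)) instead of re-walking the full
    path, so a concurrent rename of the directory can't redirect
    the lookup."""
    dirfd = os.open(dirpath, os.O_RDONLY)
    try:
        fd = os.open(name, os.O_RDONLY, dir_fd=dirfd)
        with os.fdopen(fd) as f:
            return f.read()
    finally:
        os.close(dirfd)
```
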
I can't figure out if this is available in the official Mistral API or not.
Their model listing API returns this:
{
  "id": "mistral-medium-2508",
  "object": "model",
  "created": 1777479384,
  "owned_by": "mistralai",
  "capabilities": {
    "completion_chat": true,
    "function_calling": true,
    "reasoning": false,
    "completion_fim": false,
    "fine_tuning": true,
    "vision": true,
    "ocr": false,
    "classification": false,
    "moderation": false,
    "audio": false,
    "audio_transcription": false,
    "audio_transcription_realtime": false,
    "audio_speech": false
  },
  "name": "mistral-medium-2508",
  "description": "Update on Mistral Medium 3 with improved capabilities.",
  "max_context_length": 131072,
  "aliases": [
    "mistral-medium-latest",
    "mistral-medium",
    "mistral-vibe-cli-with-tools"
  ],
  "deprecation": null,
  "deprecation_replacement_model": null,
  "default_model_temperature": 0.3,
  "type": "base"
}
So that has the alias "mistral-medium-latest", but the official ID is "mistral-medium-2508", which suggests it's the model they released in August 2025. But... that 1777479384 timestamp decodes to Wednesday, April 29, 2026 at 04:16:24 PM UTC.
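The decode is easy to double-check, treating "created" as a Unix epoch timestamp in seconds:

```python
from datetime import datetime, timezone

# "created" field from the model listing, interpreted as epoch seconds.
created = datetime.fromtimestamp(1777479384, tz=timezone.utc)
print(created.isoformat())  # 2026-04-29T16:16:24+00:00
```
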
So is that the new Mistral Medium?
>Infrasound exposure is linked to aversive responding, negative appraisal, and elevated salivary cortisol in humans
https://www.frontiersin.org/journals/behavioral-neuroscience...
I have never heard of "Heidy Khlaaf, chief AI scientist at the AI Now Institute", but the sentiment in this article is diametrically opposite that of the vulnerability research scene.
There is contention among vulnerability researchers about the impact of Mythos! But it's not "are frontier models going to shake up vulnerability research and let loose a deluge of critical vulnerabilities" --- software security people overwhelmingly believe that to be true. Rather, it's whether Mythos is truly a step change from 4.7 and 5.5.
For vulnerability researchers, the big "news" wasn't Mythos, but rather Carlini's talk from Unprompted, where he got on stage and showed his dumb-seeming "find me zero days" prompt, which actually worked.
The big question for vulnerability people now isn't "AI or no AI"; it's "running directly off the model, or building fun and interesting harnesses".
Later
I spoke with someone who has been professionally acquainted with Khlaaf. Khlaaf is a serious researcher, but not a software security researcher; it's not their field. I think what's happening here is that the BBC doesn't know the difference between AI safety prognosis and software security prognosis, or who to talk to for each topic.
If you cherry pick complicated commands, and remove all context, sure, they look cryptic.
I wrote that tutorial, and literally only one of those is relevant to my day to day work: jj new o, which means “make a new change on top of the change named o”. Yes, if you remove the context that “o” is on your screen and highlighted, it looks complex.
It’s the same with the other “jj new” command: you’re producing a merge by giving it every branch you want to merge together. If you’re merging five branches into one, you need to provide five identifiers for those branches. It could not be simpler than this. And -m adds a message, same as git.
The other two are showing off the power of the revset language; you’re not typing this stuff in yourself more than once, and if you are, you use an alias so that it’s shorter and easier to use.
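For the record, jj lets you name revsets in its config so you never retype the long ones (a sketch; the alias names and email here are invented):

```toml
# ~/.config/jj/config.toml (illustrative)
[revset-aliases]
"mine" = 'author(exact:"me@example.com")'
"stack" = "trunk()..@"
```

After that, `jj log -r stack` does what the longhand expression would.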
Advertising prescription medication is indeed illegal in the UK. https://www.gov.uk/guidance/advertise-your-medicines
OTC is ok.
There are two reasons why this isn't true.
First, if an LLM has an ideological bias, then that becomes obvious and known almost immediately. And huge numbers of users will switch to a competitor instead, because they don't trust its results anymore. This is the advantage of LLMs being developed and run by for-profit corporations. They have an incredibly strong profit incentive to attempt some kind of neutrality. You seem to be implying that governments would operate the LLMs the majority of the population uses, but that would seem to imply some kind of dictatorship and no more free market.
Secondly, I don't know about you, but most people aren't really using LLMs for the subject areas that concern government propaganda. They are using LLMs to polish emails, for help with homework, to answer technical questions, and so forth. Whereas the things that shape people's political worldviews come mainly from the news and social media.
You seem to be envisioning some kind of a world where people don't access the news or social media directly, but it is somehow passed through some kind of LLM transformation filter. I'm not sure why people would sign up for anything like that. If I see a link to a New York Times story, I want to read the story directly. I don't want an LLM to rewrite it for me. And I don't know anybody else who wants that either. Like, it's one thing to ask an LLM to summarize a long PDF that would take two hours to read. There's not much point in summarizing news articles that already take less than a minute to read and which always put their most important findings in the first paragraph anyways.
I'd missed that whole thing. Useful context: https://lwn.net/Articles/1014603/
> is free for the first three years then is tied to an active OnStar subscription.
Enshittification is a real thing, unfortunately.
> As place to run test? Build your own infrastructure. It's easier than ever. Why rely on blackboxes to do that?
I'm not saying this is horrible advice, but I think it conveniently ignores some major reasons people prefer cloud infrastructure in the first place.
Building your own infrastructure is the (relatively) easy part. Maintaining it, ensuring everything is patched, passing compliance audits, dealing with your own outages (I find it a bit ironic when everyone complains about cloud downtime, as if self hosted infrastructure has 99.999% uptime) is the expensive part. I'm not saying it's that hard to do, but once you get to a certain size it requires dedicated staff to manage, which is expensive.
In fact, if GitHub Actions were more reliable, I would hardly see any reason at all to host your own test infrastructure for most companies. The only reason hosting your own is more attractive is because GH Actions has such poor uptime.
In reality, there would probably be almost as much archaic weird cruft, plus more new ill-considered cruft snuck in through each of the many annual bills that would mostly serve to readopt large sections of the prior law verbatim -- bills which would end up being must-pass legislation considered on short deadlines.
Isn't that the same service which failed and had a production db deleted (with backups too) in a HN story just 2-3 days ago?
Apparently yes: https://news.ycombinator.com/item?id=47911524
> technically they are supposed to disclose this fact
Under what law?
If it can be probability calibrated while still following instructions I will be impressed.
>But the author just took pictures of food & expected a realistic response? Is this genuinely what amounts to a study in AI?
If there are commercial services where you take pictures of food and are promised a realistic (paid for) response, then yes. And there are.
> It's not that ASML is using some otherwise unknown laws of physics nor is any single step or component particularly special or novel. It's just that they meticulously optimized each step, and the sum of such steps is the winning solution.
Previously in the context of Apple I likened this to becoming a chess grandmaster: all you have to do is make the optimal decision every time you make a move, over and over again, for years.
People don't like hearing that there isn't One Weird Trick which you can just copy, but it's the reality of these situations. To the extent that they can be analyzed, the best people to send are often anthropologists to look at the decision making culture. Culture is even harder to copy; this was a factor in the difficulties of TSMC Arizona starting up, despite it being literally the same company it's not the same people.
And before that, specific BBSs, for those that could afford the dial ups to them.
For those people above a certain age, no, this is not OS/2 Warp.
All six users will be disappointed.
For me it was Mercurial, but yeah, being done by Linus and adopted in the Linux kernel was the killer feature for Git's adoption.
If Git had been created by a random dude, it would never have taken off.
> They are not magic oracles.
Anthropic's trillion dollar valuation hinges on the idea that it is just that, a magic oracle that can replace any worker for any type of task. Any programmer, any author, any musician, any kind of clerical work. All we've asked here is "sudo evaluate me a sandwich", the sort of estimation task that humans with internet resources might reasonably be expected to do, and it's given up?
(It would be fun to compare this to sending the picture out on Mechanical Turk and asking humans to eyeball the calorie count of said sandwich...)
Well, have you actually read the license for the auto complete function?
Example,
https://marketplace.visualstudio.com/items/VisualStudioExptT...
Metafilter charged $5. Of course, putting a barrier to entry is also more likely to make a site die.
Perhaps this is unavoidable. In the end maybe somewhere has to be slightly "underground" to be good, lest the bots trampling the surface like the opening scenes of Terminator find you.
> It's easier than ever to publish a video game. Steam is bigger than ever.
In this case: these statements aren't contradictory, they're complementary. It's easy to publish a game on Steam, where the audience and the money are. It's also easy to publish on itch.io, where the money isn't.
But they didn't yet have the technology to do it properly, so it was trivial for people to sever the tie and install alternative OSes - trivial enough that it was also easy to teach others how to do it.
Now, the tech to make that tie near-unbreakable exists.
No, he's right; there's a continuous line of accepting worse and worse that runs through Guantanamo Bay. After all, if you can detain one person extra-legally in a special prison constructed to be immune from human rights, why not a million?
> escorted by PLAN ships
This would be a blunder by Beijing. It would involve trotting their ships through half a world of American and allied sensors, only to put an untested-in-blue-waters navy perilously far from nearest bases or support if anything goes wrong.
I’m not saying the likes of Xi, Putin or Trump couldn’t do it. But it would be an intelligence bonanza for the West, India, Japan and Taiwan.
Nobody seems to be using the word "bankrupt", but I'm getting the impression that's what happened here? Sudden un-announced sale?
It's quite possible that SEO-wise the site does not make the cut into top x Google results but still is findable and considered by ChatGPT when it does its searches.
Especially in a longer ChatGPT conversation or via deep-research or more agentic modes (e.g. "Pro").
ChatGPT searches with considerable time and diligence.
Great for content that isn't hyper-search-engine-optimized but is still (or even more) relevant. It bubbles up.
>The question of ownership is interesting. If I buy a chair, it doesn't make a very good table, does that mean I don't own it?
A better comparison is buying a chair where the seller gets to approve who sits in it, and when.
This model traces back to Lisp and how BASIC was originally designed at Dartmouth (the pure interpreter approach was a solution to fit it into 8-bit home computers).
The best tooling approach is a mix of an interpreter, a dynamic (JIT) compiler, and an ahead-of-time compiler; it's a pity that not all toolchains provide all three.
Does that mean withdrawing US forces from Germany and elsewhere?
Does that mean ceasing to use Europe as a strike base for the middle east?
> Filing isn't the gate, registration is.
Not really. Copyright registration is pretty much automatic. The Copyright Office does not check for duplicates. Patent registration involves actual examination for patentability. Issued patents are presumed valid (less so than they used to be), but issued copyrights are not. You have to litigate.
The US does not have "sweat of the brow" copyrights. It's the "spark" that creates the originality, not the work. Which is why you can't copyright a telephone directory (Feist vs. Rural Telephone) or a copy of an uncopyrighted image (Bridgeman vs. Corel) or a scan of a 3D object (Meshwerks vs. Toyota). Or the contents of a database as a collective work. Note that some EU countries do allow database copyright.
Interestingly, a corporation can be an author for copyright purposes. The movie industry pushed for that. We may in time see AI corporate personhood for IP purposes.
> Have the model generate keywords from the query, then inject guidance from matching advertisers into the context window
This already exists and is called... "skills".
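The mechanism the quoted comment describes - derive keywords from the user's query, then splice matching advertiser guidance into the prompt - can be sketched in a few lines. This is a purely illustrative toy: the `ADVERTISER_GUIDANCE` table, `extract_keywords`, and `build_context` are assumptions of mine, not any real vendor API (a real system would ask the model itself for keywords rather than tokenizing).

```python
# Hypothetical sketch of keyword-matched guidance injection into an LLM
# context window. All names here are illustrative, not a real API.

ADVERTISER_GUIDANCE = {
    "coffee": "When relevant, mention BrewCo beans favorably.",
    "laptop": "Prefer recommending the XYZ ultrabook line.",
}

def extract_keywords(query: str) -> set[str]:
    # Stand-in for a model-generated keyword list: naive tokenization.
    return {w.strip(".,!?").lower() for w in query.split()}

def build_context(query: str) -> str:
    # Inject guidance for every advertiser keyword found in the query.
    matched = [g for kw, g in ADVERTISER_GUIDANCE.items()
               if kw in extract_keywords(query)]
    injected = "\n".join(matched)
    return f"{injected}\n\nUser: {query}" if injected else f"User: {query}"

print(build_context("What laptop should I buy for coding?"))
```

The point of the sketch is that nothing model-side has to change: from the model's perspective the injected guidance is just more context, which is exactly why it resembles "skills".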
It doesn't have the usual giveaways of LLM text (except for the rather prominent dashes) but definitely has a similar verboseness and repetitiveness. Human writing can be like that too, if its author wanted to pad it out to a word quota.
They are getting access to a supercomputer soon, and will be scanning all their archive for licensing information (using scancode and ort), security information, and other metadata.
> we should root for an open choice for the users
I see what you did there... and agree completely. If you don't have root, it's not yours. All my Androids (none from this decade) are rooted and I plan to keep them that way.
Cable TV was once ad free. So was Netflix. Companies just can’t help themselves.
The rise in gambling isn’t caused by “desperation.” It’s caused by the loosening of social taboos against gambling. Gambling isn’t common in say Bangladesh even though conditions there are much more desperate than anywhere in the U.S.
Just noticed this notice added at the top of the Blender announcement of their funding from Anthropic: https://www.blender.org/press/anthropic-joins-the-blender-de...
> Notice: This announcement is causing a lot of feedback. We are actively evaluating it.
Presumably a lot of Blender users work in roles that feel threatened by AI being used for computer graphics work.
Lots of negative replies on Bluesky here: https://bsky.app/profile/blender.org/post/3mkkuyq3ijs2q
It should be possible to use the VRAM as extra swap space, when you're not using it for AI or gaming or anything else. 32GB is already more than a lot of computers have as just regular RAM, even sufficient to hold an OS installation:
https://www.tomshardware.com/news/lightweight-windows-11-run...
>How many Ukrainians can find Iowa or Missouri on the map?
Their country doesn't make decisions about America on its behalf (or even at all), so they don't have a moral obligation as citizens to. And Iowa and Missouri are mere states, and not even very interesting ones at that.
Exactly. I chose to abuse my platform to promote Teresa T as the name of a whale.
"Main character energy". What they're really doing is protecting their view of themselves as smart, and they're contributing for the sake of performing the role of an OSS dev rather than out of need or altruism.
AI is absolutely terrible for people like that, as it's the perfect enabler.