What are the most upvoted users of Hacker News commenting on? Powered by the /leaders top 50 and updated every thirty minutes. Made by @jamespotterdev.
In my (obviously anecdotal) testing, DeepseekV4 Pro was better than Sonnet at coding. It is much slower, but also many times cheaper, especially with the promotion right now.
A 12oz ribeye is a pretty generous serving; it's not something that sneaks up on you.
The surprising thing for me, having settled into a ~1400kcal budget, is how tricky it is to hit protein goals. You go into this thinking it'll be like going low-carb, you'll just eat a lot of beef, but the fun cuts are not efficient protein delivery vehicles. Hitting the kcal budget is effortless; getting the protein in, not so much.
A relevant recent tweet from antirez: https://x.com/antirez/status/2054854124848415211
> Gentle reminder on how, in the recent DS4 fiesta, not just me but every other contributor found GPT 5.5 able to help immensely and Opus completely useless.
I've noticed the same for lower level squeezing-as-much-performance-as-possible code work.
> 60% of evaluated AI Scribe systems mixed up prescribed drugs in patient notes, auditors say
Not mentioned, as far as I can see: the comparative human mistake rate.
Having seen a lot of medical records, 60% sounds about normal lol.
I got this running on a 128GB M5 the other day - pretty painless, model runs in about 80GB of RAM and it seemed to be very capable at writing code and tool execution.
Wouldn't that same logic exclude evidence from Google searches, like "how to get away with murder"?
> if I do it to help me prepare my defense, then the exact same queries would be subject to subpoena/discovery
We need a law where someone can clearly designate a chat privileged, with severe consequences for mis-use.
And yet we trusted Piketty to do it!
"Lucien,
I see no need for this IPv8. IPv6 was carefully engineered over many years and while not perfect, works and is deployed. What problem are you trying to solve? I seem to have missed that."
> Our Code of Conduct states that by signing your name as an author of a paper, each author takes full responsibility for all its contents, irrespective of how the contents were generated (Dieterrich, T. G.)
If you enjoy calling COM vtables, and doing the reference counting by hand, by all means.
Unfortunately, my experience with tirzepatide doesn't make me hopeful: It either gave me terrible diarrhea and sulfur burps, or it did nothing at all, even on 15mg. Hopefully retatrutide is different, but I'm not holding my breath.
It did work for around two weeks, though, and it was great. I constantly felt mildly carsick, so I didn't really want to eat anything, but also didn't have much trouble eating my macros.
The best way I've found to work with LLMs is another OpenAI project, Symphony (which I implemented for Linear/GitHub and OpenCode[0]).
It integrates with your issue tracker and makes the tracker the UI for the LLM. It also clones the repo for every ticket, and can set up fixtures/etc. I can work on multiple items at a time, which is fantastic because otherwise you have to wait for the LLMs a lot.
Apparently both authors would develop a better way of explaining and separating these things if they took some System Dynamics[1] courses. Everyone who has taken college-level economics was taught that money isn't real (I mean, the first money was owning a very large rock on a ledger, where that rock might be at the bottom of the ocean [2]). But what is absolutely real is a contract for labor or materials. Using numbers to let you trade contracts for wood and carpenters for a house, and using tokens to represent those numbers, is sufficient to create a more flexible market than straight-up barter.
[1] https://systemdynamics.org/
[2] https://www.npr.org/sections/money/2011/02/15/131934618/the-...
Are these markers actual text? Or does the model "see" one token per marker?
> How does that change if we assume the strait stays closed?
I think the default assumption is NACHO, not a chance Hormuz opens.
> Tying education to a capital-intensive and (likely soon to be) tightly regulated technology is one more step toward a different, frightening future. A world in which independent educational institutions are neutered and transformed by their reliance on a central authority into factories designed to train students according to the “needs of society” is not a new prospect — it has been the persistent dream of Fabians, technocrats, and engineers…
I hadn’t thought of this. Every school district and university tied into centralized AI inherently undermines its ability to decide how its kids are to be taught.
If they're only showing OpenGraph tags, then yeah, that's a bizarre ban. I can understand banning showing entire stories, which makes people not click through, but OG tags? The site can just change those.
AI camera watching the students?
> AI generated or not, I concur. I really want to know what Universities will look like in 10 years' time. What will be taught there that cannot be taught by an AI (whatever form or interface it has).
Computer-assisted instruction has been amazingly unsuccessful. Why is that?
Quite strange indeed, given that this was one of the main points at their security conference a few months ago.
> It's mostly inaccurate to characterize large groups by their minority fringe members.
Less so when they elect/appoint them to national-level leadership.
"As human beings are also animals, to manage one million animals gives me a headache." Terry Gou, former CEO of Foxconn. He wanted to use far more robots at Foxconn, but that was a decade ago and the technology didn't work well enough yet. It's a lot closer now, and the robot headcount in China is way up.
That's the real issue. To corporations, employees are a headache. The fewer employees, the better.
Now this is interesting, because moving away from foreign cloud vendors hardly helps if everything else stays the same.
Maybe some Jolla sponsoring as well?
Putting info into a spreadsheet is a higher level of abstraction that doesn't require thinking. There are many concrete representations like that. LLMs don't use them much. This is a lack.
Can you point a LLM at a body of code, and tell it "give me a concise UML chart of what this does"? I'm not advocating humans writing UML, but some representation like that may be useful to AIs. Except that they don't really do graphs very well. We may need a specification language intended to be read and written by AIs, readable by humans but seldom written by them. Going directly from natural language specifications to code triggers the LLM blithering problem of generating too much code.
Most people who say this didn't/can't happen to them are the worst cases...
Well, James, forgive me for being so inquisitive; but during the past few weeks, I’ve wondered whether you might be having some second thoughts about the mission.
How good a position can you get from GPS today in receive only mode?
You can download and store Open Street Map for individual states. Map data doesn't have to come in over the air. That's not the problem. It's enhancing GPS with cell phone tower data that's the problem. That requires a cell connection.
> "new enrollments for next year are down close to 20%."
Does this mean that MIT admitted fewer people, or that there are fewer applicants? The article does not seem to say.
Now they can, 30 years ago not really.
Looks into the CVE, ah, a heap memory corruption, business as usual.
Not only that, but to legally be a charity, you have to spend at least 5% of your assets every year. So not only do they have to stay ahead of their own growth, they have to spend down 5% of quite a lot!
It may be good but also can be very problematic.
Organizations don't really shrink well. When times are good, they hire a lot of people that are marginally necessary. Over the good times, these roles become well-integrated into how the organization does business; whether or not they were necessary at first, people start depending on that person for a task, their approvals become part of a critical workflow, they develop special institutional knowledge without which the institution won't function, etc. When the organization needs to shrink, the marginally-necessary roles all get laid off. Except now you have all these unfilled dependencies. Other remaining employees depend upon the now-gone employees to do their jobs. Communication processes break. People get demoralized as they realize the organization is broken anyway, and quiet-quit or start looking out for their own self-interest.
You run into Gall's Law in action: "A complex system that doesn't work cannot be patched up to make it work: you have to start over with a working simple system."
Lots and lots of things are going to break as fertility declines and the population shrinks. Education is going to be one of the first ones hit because it explicitly deals with young people, but likely this will go right up to capitalism and the state.
It's beyond contest that the g factor is real in part because it's a statistical inevitability in any series of related tests, be they for intelligence or product/market fit in automobiles. It's an exploratory statistic, a hint at underlying causality; it is not a dispositive revelation about the structure of human thought.
Sure: there's a battery of general cognitive tests, and if you smush data sets together a dominant factor will emerge. And?
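The "smush data sets together and a dominant factor will emerge" point can be sketched numerically. This is a toy simulation (all numbers invented, not any real IQ battery): six tests that each share one latent factor plus independent noise, which is exactly the structure any battery of positively correlated tests has.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# One shared latent factor plus per-test noise: the generic structure
# of any set of positively correlated measurements.
latent = rng.normal(size=n)
tests = np.column_stack([latent + 0.8 * rng.normal(size=n) for _ in range(6)])

# Eigen-decompose the correlation matrix; the top eigenvalue is the
# share of variance the "general factor" soaks up.
corr = np.corrcoef(tests, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]  # descending order
share = eigvals[0] / eigvals.sum()
print(f"first factor explains {share:.0%} of variance")
```

The first factor dominates by construction, which is the point: its emergence is a property of the correlation structure, not evidence about what causes it.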
Never; a couple of years ago Apple gave up on the server market, which is why having Swift on Linux is so relevant for app developers.
Now they gave up on the workstation market, which really enjoys having slots for a myriad of cards.
Having a Thunderbolt cable salad is only for those that miss the external extensions from 8 and 16 bit home computer days.
Which is clearly what Apple is focused on nowadays, if you look back at the vertical integration before the PC clones market took off.
So now if you really need a workstation, it is either Windows, or one of those systems sold with Red Hat Enterprise/Ubuntu from IBM, Dell, HP.
> Winter wheat is the dominant variety in the U.S. and is (and is projected to be further) down due to drought.
Both drought and the fertilizer shortage (which, as the article notes, came too late to affect planting decisions but DID impact the costs, and thereby the decisions on applied quantities, of nutrients for this year's winter wheat crop) are impacting winter wheat yields.
Two hours ago you posted "I don’t believe claims made without evidence", lol.
Third world is historically outside the American (first world) and Soviet bloc (second world).
I don't think it's terribly relevant today, but why beat around the bush? Let's call it how we mean it:
Poor.
Right and there's no wars in Ukraine or Iran, they're 'special military operations' or 'excursions.'
And the Ocean Mariner, that they didn't seize, and just escorted out of the area?
I bet the majority of people reading this really think Claude cracked the encryption.
Why not just build affordable housing instead of luxury condos, you're creating a weird false equivalence.
You're missing GP's point. The objection is not to the inclusion of swift bricks in new houses but the belief that it is sufficient to stabilize/restore the population, because relatively few new houses are being constructed.
Solid compromise is Kagi's research assistant. Aggressively cites, unlike Claude. Concise, unlike Grok.
I agree with you, I immediately understood what they are, but what's the problem with more clarification? I've upvoted you.
I believe it's an evolution of the technique used in GPT-Image-1 (or whatever they called that), which was derived from their work on making GPT-4o an "omni" model that can directly output images and audio in addition to text.
The 2024 GPT-4o launch post https://openai.com/index/hello-gpt-4o/ hints about how that works:
"With GPT‑4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network."
> a million different versions of which will get them bad press: “Apple told me to go and pick up my dead child’s cancer medication!”
This is a very tricky one.
>> Know that my son has a test on Thursday and hasn’t opened the revision material since Monday. A gentle nudge, not a surveillance report
This feels like a surveillance report to me. The extent to which adults should surveil their children's devices is hotly disputed. There's one faction which thinks total surveillance should be mandatory (as a solution to the age verification problem or otherwise), and others which believe that children can and should have privacy (are you absolutely sure you should be monitoring your seventeen year old's conversation with their girlfriend?).
Not to mention that it's tracking a family member's interaction with a third party. We can pre-emptively assume the school knows about and approves of this one, at least.
> Track our medication schedule and ping people (or me, if someone misses a window) without turning into a clinical monitoring tool.
This feels like the sort of thing where you have weeks of meetings trying to work out whether HIPAA applies or not. It would definitely be valuable. It's also a problem if it's wrong, even if that's entirely down to user error. So people make do with the ad hoc version of general purpose calendar entries.
(not to mention the period tracker use case: you want to be careful with technologies which provide an evidence trail that the government have announced they want to use against you)
Maybe Athens and Alcibiades is a better example? Or the Carthaginians being Carthiginians.
"due largely to the heavy new 8% tax on our endowment returns, a burden for MIT and only a few other peer schools"
I went digging. Turns out that's a 2025 "Big Beautiful Bill" thing, which raised that from 1.4% to 8% but only for colleges where the endowment exceeds $2,000,000 per student. Which meant MIT, Stanford, Princeton, Yale, Harvard.
https://waysandmeans.house.gov/2025/05/14/ways-and-means-vot... boasts that this "Holds woke, elite universities that operate more like major corporations and other tax-exempt entities accountable".
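The mechanics can be sketched as back-of-envelope arithmetic. This is a hypothetical simplification of the tiering as the comment describes it (rate jumps from 1.4% to 8% above $2M endowment per student); the example endowment figures are made up, not any school's real numbers.

```python
def endowment_tax_rate(endowment, students, threshold_per_student=2_000_000):
    # Simplified version of the 2025 rule described above: schools whose
    # endowment exceeds $2M per student pay 8% on returns, others 1.4%.
    per_student = endowment / students
    return 0.08 if per_student > threshold_per_student else 0.014

# Illustrative only: a $25B endowment over ~11k students clears the bar,
# a $1B endowment over 30k students does not.
print(endowment_tax_rate(25_000_000_000, 11_000))  # 0.08
print(endowment_tax_rate(1_000_000_000, 30_000))   # 0.014
```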
> like Pedophile
I really wouldn’t conflate these two. People with documented allegations of child rape are in a separate category from diagnosed-over-the-TV types.
I'm really thankful I put my bitcoin in a time vault back in 2012 or so. It was inaccessible until about last year, and my $10 is now worth $100k.
Thank you MtGox.
Funny, I was just sort of spec'ing this out to myself yesterday.
I'd consider building the system out as an MCP server rather than trying to bundle the agent with it. I had an AI build something out that is just a tasklist that works the way I think about tasks, which I've been using both personally and professionally. It's an MCP server only, which I can expose on the internet with OAuth. It has been surprisingly fun to use, because the AI can spontaneously interact with the information in ways I didn't program in. I have a recurring task with an AI to give me a dump of my current top tasks once a day to my phone.
Professionally, I'm working between a lot of different teams with their own Jira boards and I needed something to use myself to organize and prioritize tasks that can't be prioritized within one place in Jira. With the Atlassian MCP server hooked up to the same agent as my code it is fairly trivial to attach a Jira bug to a task and then prompt the AI to do whatever to the bug attached to this task. I put an explicit field for it in to the task definition but you don't even really need that, just putting the bug in the description is all that was really necessary.
The point I am trying to make here is, you don't even really have to "design" a product at this point. You just need to expose things to the AI so that when the user makes some vague statement about what they want to do it can convert that into concrete calls. The AI and the user will do things with it that you didn't even think of, and users can just add things by saying things in the descriptions of various tasks. I've mentioned how even if AI were to freeze today for the next 10 years we'd still be learning how to use AI and getting more out of it... this is I think a still under-explored application space.
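A minimal sketch of the shape being described (this is not the commenter's actual server; all names and fields here are invented): the "product" is little more than a task store whose operations get exposed as MCP tools, and the model's flexibility supplies the features.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    title: str
    description: str = ""   # free text; a Jira key can just live here
    done: bool = False
    priority: int = 3       # lower number = more urgent

@dataclass
class TaskList:
    tasks: list = field(default_factory=list)

    # Each method maps naturally onto one MCP tool; the agent composes
    # them from vague requests like "what's on top today?".
    def add(self, title, **kw):
        self.tasks.append(Task(title, **kw))

    def top(self, n=3):
        open_tasks = [t for t in self.tasks if not t.done]
        return sorted(open_tasks, key=lambda t: t.priority)[:n]
```

The interesting part is what's absent: no UI, no workflow engine, no per-feature code. The agent improvises over the store, which is the "under-explored application space" point.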
Korea is hardly the developing world, but they're from not-US, basically, which might as well be the developing world as far as the conversation is concerned.
Thank you! I grabbed the first link I found, but the GitHub repo is definitely superior.
> And then they dump it in your lap as being helpful
I've been guilty of this and gotten pushback from my manager: "this feels like homework, cut these options down to 100 words each, max".
Curation and refinement are even more important when you can have genAI generate reams of text.
Seeking outside signals is even more important: talking to customers, looking at real usage data, and more. It's too easy to believe what Claude tells you, even if you say "please argue against this idea", which you always should.
You certainly can sue: "The ruling comes after Meta sued Italy’s national telecommunications regulatory agency (AGCOM) in Italian court in 2023"; that's the normal process for disputing regulatory rulings. Doesn't mean you'll win though.
I'm very much in two minds about this because "news" is not a morally neutral category in itself, such as with similar laws benefiting News Corp in Australia, but it's clear that Meta/FB is a much worse unrestrained actor.
Hey, someone submitted my old article. On my birthday!
Oh, people hate it… and even someone I definitely look up to.
You're absolutely right, though. I don't remember it being that bad; probably I just read over it when resurrecting the article, because I'm so familiar with every word.
I'll slap some <hr> tags on it when I'm back home from my holiday.
Do they not have mice and rats there? This looks like a place those creatures would nest long before a bird got to it.
The author highlights an interesting point. There are a couple variables in action:
A- The difficulty to publish the tool
B- The difficulty to create the tool
C- The usefulness of the tool to others
D- The social reward for publishing the tool
E- The negative incentive of adding a dependency
Difficulty to find a canned solution goes up with A (someone needs to figure out how to publish it) and B (because someone needs to create it), but, the more useful it is to the community (C), the easier it gets to find it, because people will tell you.
If A and B are substantially different, say A much higher than B, people will tend to write their own and forget about it. If B is much higher, there will be fewer solutions to your problem. If A and B are low, and the social reward for publishing (D) is higher than the price of depending on something else (E), you'll have a leftpad situation. A lot of NPM is made of packages with high C and D and low E.
In the case of Emacs Lisp, A used to be high but is now low; B (once you climb the learning curve) is low; C, D, and E aren't high either way. This can lead to a scenario where you build the tool before you even look for an existing one (unlike with VSCode, and with Eclipse before it; both have a high B).
I see a thesis someone younger than me will want to bring out to this world.
For years my favorite hackathon kit has been a tablet + cheap bluetooth mouse + cheap bluetooth keyboard. It could be an iPad or an Amazon Fire tablet so long as it can run an RDP client and I can log into my home computer or a big cloud machine.
Static RAM is not DRAM.
You certainly could try a 20bn cell SRAM in 155mm^2, if you could handle the routing, but the power consumption might surprise you.
Also, if the game is single-player, you don't care: Simply let the players enjoy the game how they want to enjoy it.
A do-nothing C program (int main() { return 0; }):
$ time ./a.out
real 0m0.002s
user 0m0.000s
sys 0m0.002s
A do-nothing Go program:
$ time ./tmp
real 0m0.002s
user 0m0.000s
sys 0m0.003s
I don't believe Go has any optimization to skip starting its runtime when it isn't necessary, but when I added a goroutine that immediately blocks on a channel read that will never come, the numbers didn't change. That doesn't really time the runtime; the program probably terminated before the goroutine was scheduled to run anything. It just guarantees there wasn't an early exit because the compiler or the runtime "realized" it didn't need to start the runtime.
I'm sure the Go program is somewhat slower to start and end than C, and that we're running into the limits of how quickly processes can be spawned, plus other timing overhead obscuring the difference. However, for practical purposes, "it starts up in less than the overhead of starting a process from the shell" is the same speed.
Not even a "do nothing" Python program, no Python program at all:
$ time python3 -c 1
real 0m0.012s
user 0m0.008s
sys 0m0.004s
If you had a Go program that was slow to start up, it was your program, not Go. By contrast, Python, and the dynamic scripting languages in general, can be quite slow to start up, just in the reading and compiling of the code. (Even .pyc files, IIRC, take processing, just less processing than Python source code... it's still nowhere near "memory map it in and go" as it is for statically-compiled languages.)
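One way to see the "spawn overhead obscures the difference" point is to time the cheapest possible process from a small harness. A rough sketch (absolute numbers vary by machine; `true` is assumed to be on PATH, as on any Linux/macOS box):

```python
import subprocess, sys, time

def time_spawn(cmd, runs=20):
    # Average wall-clock time to fully spawn and reap a process.
    start = time.perf_counter()
    for _ in range(runs):
        subprocess.run(cmd, check=True)
    return (time.perf_counter() - start) / runs

# The OS floor: a binary that does literally nothing.
print(f"true:   {time_spawn(['true']) * 1000:.2f} ms per spawn")
# A no-op interpreter start, for contrast.
print(f"python: {time_spawn([sys.executable, '-c', '1']) * 1000:.2f} ms per spawn")
```

If a compiled binary's own startup is a fraction of the `true` floor, `time` in the shell can't meaningfully distinguish it from C, which matches the numbers in the comment.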
Option 3: Elon takes over the Federal government, causes some major security incidents, and cuts off USAID stranding a number of Federal employees and cutting off short term food support for hundreds of thousands of people depending on it.
Option 4: Elon takes over a social network and tries to Orbanize the West with it.
Header files are libraries as well.
> where we identify public servants with strong technical aptitude across government, bring them into dedicated product teams
> The team’s approach was straightforward. Build working software fast. Put it in front of real users early. Collect feedback. Fix things quickly. Release updates every two weeks.
> That’s a 95% cost reduction. Both systems instead of one. Delivered faster. With 643 users already on the platform
This is a proven solution. These parts, the non-AI management ones, are proven to work in all sorts of places. Gov.uk is another example.
However, there's one massive problem with this: it doesn't involve the free market and it doesn't make any money for corporations to feed back to politicians in campaign donation kickbacks. It even involves respecting civil servants, maybe even paying them market wages! These parts are so heretical that most governments would choose the solution that is 10X more expensive and also doesn't work, every single time.
The secret ingredient is money. Money makes everything possible. Money can materialize energy and water from nowhere.
(well, it can't, but it allows you to buy them off poor people, who don't matter)
ty, I accidentally submitted the url with anchor
I think mature sysadmins accept there's a certain... bushido to their security-critical role. It is after all their job to respond to security threats, including by revoking credentials, and to recognize that they might fall on the wrong side of that some day.
But things are different both in small companies, and non-US environments where minimum notice periods or redundancy consultations are a thing. You may put people on "gardening leave" where they're still paid but not actually working. Or it may be the case that the sysadmin is the one person who knows and controls a lot of stuff, and the employer has ended up relying on them for a smooth handover. Password and role management for the "root" of things is a real problem.
Coding on 8 and 16 bit home computers still required some skills that most vibe coders certainly lack.
No, it's just a cash replacement, the trust lies in expecting people to make the payment in the first place rather than just steal unattended goods.
Basically complete disregard for the history of programming languages and learnt lessons.
Go fits well close to Oberon released in 1987, or Limbo in 1995, when exceptions and generics were still esoteric features.
Instead they had to reach out to Phil Wadler to help them, as he did with Java almost a decade earlier. panic/recover is a clunky way to do exceptions; instead of enumerations like Pascal had in 1976, it needs an iota/const code pattern; hardcoded URLs for source repos; if err all over the place like last-century programming; many errors that are plain strings; ah, and nil interfaces, what a great gotcha.
It's the same, on steroids.
No article with that title. Flagged.
"AI safety", as defined here, has most of the problem that "fact checking" for social media had. Many of the same problems the "woke" concern about "microagressions" had. Most of the techniques used in advertising. Much of what passes for political discourse today has the same problems. It's somewhat convincing bullshit.
Should AIs be held to a higher standard than X/Twitter? Than Reddit? Than Fox News? What censorship is appropriate? And, yes, alignment is censorship.
Then there's the big problem of chatbots telling you what you seem to want to hear. This is an old problem. "Happy Talk", from "South Pacific", is the entertainment version. "Wartime", by Paul Fussell, is the serious version.
As the article points out, a small percentage of the population is very vulnerable to certain types of misinformation. It may be the same fraction of the population that's vulnerable to cults. But maybe not. Cults have a group self-reinforcing mechanism and an agenda. Chatbots have neither. Worth studying.
The point here is that restrictions on chatbots strong enough to protect the vulnerable would close off most political and social discourse.
Do you think there was ethnic favoritism going on?
Bell System historical video on this.[1] This is the popular version, but it's not bad. The more technical version [2].
There's a reason we're not reading monospaced here
You underestimate the number of HN users who are reading this site in their terminal. ;-)
Lots of products have the same fraud/chargeback dynamics and are similarly disfavored by payment processors.
> Have you happened to purchase anything in the past 12 months, and looked at the Fed's inflation numbers?
The Fed doesn't issue inflation numbers. The usually cited headline inflation numbers (CPI) are from the Department of Labor’s Bureau of Labor Statistics, the ones used by the Fed as an input to monetary policy decisions (PCE) are issued by the Department of Commerce’s Bureau of Economic Analysis.
Whoah, whoah, whoah, you two, this is a happy post, not an angry post. Nothing to get wound up over! Part of the point is that you can both just go and do you!
Yes, as I said, if we accept your claim at face value, that every dollar of American practitioner-side insurance overhead --- not the delta from Canada, but every single dollar of it --- is mis-spent, you managed to identify 3.6% of the waste in the system. Congratulations.
I said earlier we'd gone round-and-round on this topic before, and I was a little burned out on it, but I didn't expect you to refute your own argument like this. I'm glad we gave it another run this time! This is a great statistic; I'll be using it elsewhere. Thank you.
>Nearly 40% of Stanford undergraduates claim they’re disabled. I’m one of them
https://www.thetimes.com/us/news-today/article/40-percent-st...
"There is no independent audit, no time series, no disclosed methodology, so we have no idea whether the real figure is higher, whether it is growing, or how it compares across the other frontier models, none of which publish equivalent data."
Tip for writers: aggressively filter out the "no X, no Y, no Z" pattern from your writing. Whether or not you used AI to help you write, it's such a red flag now that you should actively avoid it in anything you publish.
"Cheating" was pointless, because everyone else in the room was struggling just as hard as you were.
That reminds me of what an instructor (one of the best ones I've had) said a long time ago in response to one of my classmates asking if the exam could be open-book: "I could make it so, but it's not going to get any easier." The same instructor also responded to another question with "it doesn't mean I won't change the length of the exam."
Being a high trust society isn’t the same thing as being a fully egalitarian society.
Getting to “high trust for the majority” is the 0 to 1 of civilizational development. Most societies never get there—they’re low trust for everyone.
Participants were 529 (289 men, 234 women, and 6 identified as other) undergraduate business students with a mean age of 18.14 years (SD = 1.19, range 16 to 37).
Sigh. A sample of convenience. Psychology remains the study of undergraduates.
If they wanted real answers, they'd go to bike events.
This is not obvious at all.
Loyalty is a fundamental moral principle. Loyalty to a friend carries a lot of moral weight. Humans are a social animal, and loyalty to a friend can easily outweigh loyalty to some abstract institution. Like, my friend will still have my back five years from now. The university I went to won't do shit for me.
Like, if you're talking about loyalty to a friend who wants you to cover up an unjustified murder they committed, then I think most people will say the value of telling the cops about the murder outweighs the loyalty to your friend.
But for cheating on some test where probably 30% of the other students are cheating anyways? I think the vast majority of people will say that loyalty to your friend is the more important moral principle here. We all make mistakes in life, and the whole idea of loyalty and love to a friend is that we support them even though they make mistakes. As long as the mistakes are common mistakes like cheating on a test or cheating on a boyfriend, as opposed to things like felony crimes.
When you lose access to your projects, does Anthropic acquire the intellectual property? It's a real issue when it's in a machine learning system, not passive storage like Github.
>As far as I know, java has 7 GC implementations, none of which are perfect, all of which have drawbacks
Compared to Python's, all of them are beyond perfect. And 99.9% of the time you don't even need to use anything but the default.
The startup I'm at (ersc.io) is working in this space (version control more than the IDE side of things), because, in my opinion, there just plain isn't any.
>responding to incoming emails / voicetexts.
You need an AI for that?
The idea that America had “goodwill” in other countries before Trump is laughable. Where? Latin America? Africa? In the Muslim world? We bombed the hell out of all those places long before Trump. This most recent Iran war has generated less outrage in the Muslim world than the war against Iraq 20 years ago.
American foreign policy since the 1950s, fixated on fighting communism and then terrorism, has meddled with so many foreign countries that it’s silly to talk about “goodwill” towards America. That is not to say goodwill matters. Clearly the U.S. has done great without it.
Manliness is the confounding factor.
That's what happens when the storage tanks fill up. Nobody can buy it because they have to accept delivery and put it somewhere.
Yes.
"Medicare Advantage" = HMO. All the usual HMO problems.
The best Medigap plan is Plan F, which is no longer available to new subscribers. "Discontinuation of Medicare Plan F was a strategic decision aimed at promoting responsible healthcare spending and ensuring the financial sustainability of the Medicare program." It covers just about everything Medicare doesn't pay, including the various deductibles Medicare has. If Medicare covers its part, the Plan F provider has to pay their part. They don't get to question it. I don't even see hospital bills, just statements that it's been paid for.
Plan G is one step down from that.
> it doesn’t always make the bright choice
I'm available for a small fee.