What are the most upvoted users of Hacker News commenting on? Powered by the /leaders top 50 and updated every thirty minutes. Made by @jamespotterdev.
This. I've been doing a lot of accessibility work and it seems like the one thing nobody ever does is talk to a screenreader user.
> Pictograms let you parse a lot of information at a glance, because you can pattern match a group of differing symbols much faster than you can a block of text which all looks uniform
No you can’t.
I can relate; I never cared about any of the circus around job applications. Unfortunately, we are not expected to say we work for money; we have to want to change the world, to leave our mark on the universe.
The answer is so obviously "no" for the general case (making even a tiny dent to streaming/digital) that the article's title amounts to clickbait.
That's setting aside the fact that there has always been a vibrant, extremely niche cassette scene, the same way there are still 8-bit home computer fans and clubs.
At best, on top of the above, a tiny additional niche of more mainstream "hipster" artists and fans might release/get cassettes as a statement.
Both numbers summed would still be so small compared to the overall music consumption market/methods that implying any sort of "comeback" is ludicrous.
Burying anything is just horrendously expensive. Partly because of other things that are already buried.
It would be great to have the Saturn version + translation, as well as the improved movie sequences.
Maybe there is a way to port them using the Saturn's MPEG add-on (?)
OTOH, it's probably fine to watch them on YouTube in parallel.
Something I'm desperately keen to see is AI-assisted accessibility testing.
I'm not convinced at all by most of the heuristic-driven ARIA scanning tools. I don't want to know if my app appears to have the right ARIA attributes set - I want to know if my features work for screenreader users.
What I really want is for a Claude Code style agent to be able to drive my application in an automated fashion via a screenreader and record audio for me of successful or failed attempts to achieve goals.
Think Playwright browser tests but for popular screenreaders instead.
Every now and then I check to see if this is a solved problem yet.
I think we are close. https://www.guidepup.dev/ looks extremely promising - though I think it only supports VoiceOver on macOS or NVDA on Windows, which is a shame since asynchronous coding agent tools like Codex CLI and Claude Code for web only run on Linux.
What I haven't seen yet is someone closing the loop on ensuring agentic tools like Claude Code can successfully drive these mechanisms.
Go was created because Rob Pike hates C++; notice that Plan 9 and Inferno don't have C++ compilers, even though C++ was born on UNIX at Bell Labs.
As for compilation times, yes, that is an issue; they could have switched to Java as other Google departments were doing, with some JNI if needed.
As a side note, Kubernetes was started in Java and only switched to Go after some Go folks joined the team and advocated for the rewrite; see the related FOSDEM talk.
Europe has just started making a few inroads in a few places. Like the Schleswig-Holstein question. This is basically 1% of what would need to be done to be secure against state mandated compromise of Microsoft.
https://www.heise.de/en/news/Goodbye-Microsoft-Schleswig-Hol...
>Isn't the logical endpoint of this equivalent to printing out a Stackoverflow answer and manually typing it into your computer instead of copy-and-pasting?
Isn't the answer on SO the result of a human intelligence writing it in the first place, then voted to top place by several human intelligences? If an LLM were merely an automated "equivalent" of that, that would already be a good thing!
But in general, the LLM answer you appear to dismiss amounts to a lot more:
Having a close-to-good-human-level programmer:
- understand your existing codebase
- answer questions about your existing codebase
- answer questions about changes you want to make
- on demand (not confined to copying SO answers)
- interactively
- and even be able to go in and make the changes
That amounts to "manually typing an SO answer" about as much as a pickup truck amounts to a horse carriage. Or, to put it another way, isn't "the logical endpoint" of hiring another programmer and asking them to fix X "equivalent to printing out a Stackoverflow answer and manually typing it into their computer"?
>And I pick Stackoverflow deliberately: it's a great resource, but not reliable enough to use blindly. I feel we are in a similar situation with AI at the moment.
Well, we shouldn't be using either blindly anyway. Not even the input of another human programmer (that's why we do PR reviews).
Do you have any plans to add timeouts or some other mechanism for limiting the amount of CPU a webassembly call can use?
I'm always interested in options for using WebAssembly as a sandbox to run untrusted code, but one of the things I need to protect against is an infinite loop.
(I had Claude knock up an experimental Python binding to try Epsilon out, notes from that here: https://github.com/simonw/research/tree/main/epsilon-python-... )
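For what it's worth, Wasmtime handles this with "fuel" metering (charge a unit per operation, trap at zero) and epoch-based interruption; I don't know whether Epsilon exposes anything similar. The idea can be sketched with a toy stack-machine interpreter in plain Python (purely illustrative; none of this is Epsilon's or Wasmtime's actual API):

```python
class FuelExhausted(Exception):
    """Raised when the guest exhausts its CPU budget."""

def run_with_fuel(instructions, fuel):
    """Interpret a list of (op, arg) pairs, charging one unit of fuel per step.

    Toy stand-in for Wasmtime-style fuel metering: the host picks a budget,
    and an otherwise-infinite guest loop traps deterministically once the
    budget is spent, instead of spinning forever.
    """
    stack, pc = [], 0
    while pc < len(instructions):
        if fuel <= 0:
            raise FuelExhausted(f"trapped at instruction {pc}")
        fuel -= 1
        op, arg = instructions[pc]
        if op == "push":
            stack.append(arg)
            pc += 1
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
            pc += 1
        elif op == "jump":  # unconditional jump: enables infinite loops
            pc = arg
        else:
            raise ValueError(f"unknown op {op!r}")
    return stack

# A well-behaved program finishes with fuel to spare...
print(run_with_fuel([("push", 1), ("push", 2), ("add", None)], fuel=10))  # [3]

# ...while an infinite loop traps instead of hogging the CPU.
try:
    run_with_fuel([("jump", 0)], fuel=1000)
except FuelExhausted as e:
    print("guest stopped:", e)
```

Fuel has the advantage of being deterministic, which wall-clock timeouts are not.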
That is the whole point, assume there was no AMD64 to start with.
True, but you could make a warehouse of sysplexes work together using the same mechanisms we use for warehouses of generic servers. If each system takes four racks and one sysplex takes 128 racks, there would be thousands of times fewer systems to coordinate.
All that would remain is an eye-watering hardware and licensing bill.
This was perhaps my favorite part of Physics 390 ("modern physics"), which was about quantum dynamics and relativity. The speed of light is defined as a velocity (~300,000,000 m/s), but if you were traveling at the speed of light, time would stop (which preserves the rule that it's constant in all frames of reference). Also, time passes more quickly at higher altitudes, and these days we can actually measure that. Wild stuff.
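The altitude effect is a concrete calculation: in the weak-field approximation, a clock raised by height h runs fast by a fraction of roughly gh/c². A quick back-of-envelope in Python (constant surface g, which overestimates things at large altitudes; the heights are illustrative):

```python
G = 9.81            # m/s^2, surface gravity
C = 299_792_458.0   # m/s, speed of light

def fractional_rate_gain(height_m):
    """Fractional speed-up of a clock raised by height_m metres,
    in the weak-field approximation: delta_t / t ~= g * h / c^2."""
    return G * height_m / C**2

# ~0.33 m is roughly the height difference NIST resolved with optical
# clocks in 2010; 4 km is a tall mountain.
for h_m in (0.33, 4_000.0):
    print(f"h = {h_m:>8} m  ->  clock runs fast by ~{fractional_rate_gain(h_m):.1e}")
```

Even the mountain case is only parts in 10^13, which is why it took atomic clocks to see it.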
SaaS is the only solution so far that has worked against piracy and at helping open source devs whose entitled downstream users don't care how they sustain themselves.
The last cafeteria chain seen in the SF Bay Area was Fresh Choice.[1] The business problem was that the outlets attracted too many old people who didn't buy the more expensive add-on items but just bought the fixed-price salad bar.
Today, the only remaining cafeterias in the SF bay area seem to be in-house feeding operations for employees. Many of the ones in hospitals are open to the public.
There's a minimum traffic level for a cafeteria, and it's fairly high. With low traffic, the food sits out too long and becomes leftovers. Like Whole Foods' salad and hot bars.
Restaurants seem to have fads. In the SF bay area, there are few French restaurants any more. California cuisine is dead. Fish is down. (Amusingly, on Doordash, all types of fish are treated as synonyms for "salmon".) Many restaurants no longer serve bread.
Strong disagree. This trend has been underway for a few years already. There are a few reasons for this:
1. Musicians love tape. We like the frequency roll-off, we like the imprecision - but these are nostalgia. What we like most is that with tape your options are largely reduced to Record and Play, because doing anything more complicated (e.g. editing via punch-ins with synchronization) is such a PITA. They're a great tool for just making you commit to a performance instead of editing it to death.
2. In a similar fashion, young people are fascinated by a medium you have to sit through by default, because skipping around is inconvenient and might damage your tape. Not being able to listen nonlinearly promotes a different sort of engagement with the material from the fragmented one provided by streaming. To a lesser extent, music on tape has better dynamics: not because the medium is superior, but because maximizing loudness over the entire track means the whole recording will be saturated. This is desirable in some genres (metal, some kinds of dance music), but most cassette recordings avoid maximizing loudness, which sounds refreshingly different to people who grew up during the Loudness War.
3. Chinese bootlegs. In the 80s and 90s China was a target country for first-world garbage disposal, so unsold CDs and cassettes would be damaged by being run through a table saw and then shipped to China in bulk for recycling, sold by weight. While publication or importation of Western music was heavily restricted by censors, garbage imports were uncontrolled, and enterprising minds soon observed that damaged media could often be rendered playable, at least in part. This led to the emergence of a "dakou" (打口, "saw cut") music scene, with parts of albums being sold to enthusiasts in semi-underground stores with no regard to release date, genre, or marketing campaigns. This had a big impact on China's domestic music scene.
4. Differing media preferences. Other countries (Japan in particular) never lost the taste for physical media the way Anglosphere countries did. Japan was always a record collectors' paradise because industry cartelism kept the price of physical media high, but buyers were rewarded with high production quality of CD mastering, vinyl grade, and printed media, and labels would typically add bonus tracks exclusive to the Japanese editions of albums. A combination of the Japanese taste for the best-quality version of something and 30+ years of economic stagnation meant Japanese consumers were more into maintaining and using their hi-fi equipment; if you watch Japanese TV dramas, a fancy stereo is still a common status marker, much like expensive furniture. Record stores are still a big deal, and music appreciation is its own distinct hobby and social activity in a way that fell out of fashion in other countries.
5. Developing world and cheap distribution. Cassettes were popular in Africa and other developing economies for decades for reasons that should be obvious, and they're popular again with emerging/underground artists for similar reasons. You can self-release on cassette very, very cheaply, at the loss of time efficiency. You won't make much money doing this, but you can make a bit, and it's a way to target serious fans who like collecting things and want to support obscure and cool artists who have not yet gotten big and sold out. Also, making $3 on a cassette sale through Bandcamp or at a show may be easier than getting 1,000-2,000 plays on Spotify or some other service for artists who are not already famous. Self-releasing on vinyl is also possible, but typically you need to invest $1,000-2,000, whereas you can get into duplicating your own cassettes for $50, or a few hundred dollars in bulk. Vinyl is the way to go if you need to reach DJs, but cassette players are dirt cheap or free for consumers and are less effort to use than a record player.
Physical media are still a Big Deal for people who obsess over music, who care about quite different things from the median consumer.
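The payout comparison in point 5 is easy to sanity-check. Using ballpark per-stream payout figures (assumed for illustration; real rates vary widely by service, country, and deal):

```python
CASSETTE_NET = 3.00  # USD net from one cassette sale, per the comment above

# Assumed ballpark streaming payout rates, USD per stream.
PER_STREAM = {"pessimistic": 0.002, "optimistic": 0.004}

for label, rate in PER_STREAM.items():
    streams = CASSETTE_NET / rate
    print(f"{label} (${rate:.3f}/stream): ~{streams:.0f} streams to match one $3 cassette")
```

At those assumed rates, one $3 sale is worth roughly 750-1,500 streams, which lands in the same ballpark as the plays figure quoted above.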
What distinguishes an Eames chair on display at the Cooper Hewitt from the same chair on display at MoMA or countless other museums in the world? What distinguishes it from the same chair on display, and for sale, at the Herman Miller showroom?
What, if not the stories that the institutions who collect these objects tell about them?
One of them is near enough for me to visit on a day trip. I can understand design museums being essentially franchised showrooms for contemporary culture objects, but I think he asks some reasonable questions about the point of curation and the role of museums in modern society.
No. They are a multi-generational institution at this point and they are constantly evolving. If you work there it definitely FEELS like they are dying because the thing you spent the last 10 years of your career on is going away and was once heralded as the "next big thing." That said, IBM fascinated me when I was acquired by them because it is like a living organism. Hard to kill, fully enmeshed in both the business and political fabric of things and so ultimately able to sustain market shifts.
I largely agree. Plato leveled somewhat similar criticisms at the early use of the written word millennia ago. I think what's fundamentally different with internet communication is the timed nature of the medium, conveying a sense of pseudo-urgency: disagreements must be entered in a timely fashion, and failure to do so will imply correctness or at least tacit agreement.
Incidentally, I find your comment significantly more substantive and thoughtful than Weinstein's.
We surely have; Metal, CUDA, Pix, and PS/Switch also have theirs.
This is exactly yet another reason why researchers prefer CUDA to the alternatives.
> I may eventually get to the wall label part but this is tough.
Good luck. After the first few paragraphs I thought of a great quote that I heard somewhere: "Twitter ruined my reading skills, but it vastly improved my writing skills."
If you're trying to actually get a point across (vs. writing something that is just read for pleasure) GET TO THE DAMN POINT.
> plenty of developer talent
> number of humans that are literate enough in business, marketing, communications, and software development to pull this off
Those aren’t the same thing.
> “Remake microsoft office suite, but cheaper” won’t work
Probably not. But adapting open-source software for New Zealand’s government can. It just takes a rare combination of technical skill, executive function, leadership ability and emotional self-control to pull off.
I feel like this sort of misses the point. I didn't think the primary thrust of his article was so much about the specific details of AI, or what kind of tasks AI can now surpass humans on. I think it was more of a general analysis (and very well written, IMO) that even when a new technology advances in a slow, linear progression, the point at which it overtakes an earlier technology ("horses" in this case) arrives very quickly - it's the tipping point at which the new tech surpasses the old. For some reason I thought of Hemingway's old line: "How did you go bankrupt?" "Gradually, then suddenly."
I agree with all the limitations you've written about the current state of AI and LLMs. But the fact is that the tech behind AI and LLMs never really gets worse. I also agree that just scaling and more compute will probably be a dead end, but that doesn't mean that I don't think that progress will still happen even when/if those barriers are broadly realized.
Unless you really believe human brains have some sort of "secret special sauce" (and, FWIW, I think it's possible - the ability of consciousness/sentience to arise from "dumb matter" is something that I don't think scientists have adequately explained or even really theorized), the steady progress of AI should, eventually, surpass human capabilities, and when it does, it will happen "all at once".
I read the abstract (not the whole paper) and the great summarizing comments here.
Beyond the practical implications of this (i.e. reduced training and inference costs), I'm curious if this has any consequences for "philosophy of the mind"-type of stuff. That is, does this sentence from the abstract, "we identify universal subspaces capturing majority variance in just a few principal directions", imply that all of these various models, across vastly different domains, share a large set of common "plumbing", if you will? Am I understanding that correctly? It just sounds like it could have huge relevance to how various "thinking" (and I know, I know, those scare quotes are doing a lot of work) systems compose their knowledge.
What you've re-invented is Keedoozle, from 1937.[1] This was the first automated grocery store. Three stores were opened, but there were enough mechanical problems that it didn't work well.
[1] https://rarehistoricalphotos.com/keedoozle-automated-store-p...
> extent to which some people get food delivered is absurd
More than getting uppity online about others' personal dining choices?
I think by the time the wealthy realize they're setting themselves up for the local equivalent of the French Revolution it will be a bit late. It's a really bad idea to create a large number of people with absolutely nothing to lose.
> PA tried to clean hamas from those areas, got it ass kicked and asked Israel for help.
PA has to ask Israel to do anything of substance in the WB, because the WB is a mix of Israeli and PA controlled areas, with Israel controlling internal boundaries even between adjacent PA-controlled areas.
> Major reason why there are no elections in west bank, it's because current palestinian government knows that hamas will win
No, the only reason is that Israel has refused to cooperate with all-Palestine elections negotiated and agreed to between Fatah (the party in control of the PA government) and Hamas, on multiple occasions (because Israel administers parts of the WB, and for other reasons, active Israeli cooperation would be necessary.)
Most likely because the divide between the Fatah-led PA and Hamas, and the ability to portray both as undemocratic, serves Israel's propaganda and other interests.
H.T.M.L. I'll grant it's a little awkward on a phone but I never get tired of using the usual HN web interface on a tablet.
> Authored by global consulting firm Deloitte and published by the Department of Health and Community Services in May, Newfoundland and Labrador’s Health Human Resources Plan contains at least four citations which do not, or appear not to, exist. The report cost the province nearly $1.6M, according to documents obtained through an access to information request and published on blogger Matt Barter’s website.
But that's the point. The icons help you find the "delete" section.
Icons aren't large enough to then also distinguish between deleting a row or column or table. That's what the label is for.
It's not laziness, it's good design.
I've seen some apps that have icons on menu items when those icons are used for the same functions in other UI elements (shortcut bars, etc.) that don't require digging into the menus, functioning as kind of a reminder that "you can do this elsewhere where you see this symbol". It is kind of like an inverse tooltip (where a tooltip you get by checking the icon and discovering the action description, this you get to by going to the action in the menu and discovering the icon.)
I think this is a useful pattern, but I'm not convinced that having specific distinct icons for menu items to highlight them as important is useful. Presentation order and/or simply a consistent difference in presentation for the highlighted items makes more sense.
My grandfather started his career in the Polish mafia and eventually went straight to work as a laborer and then a bricklayer; he was told he had an "enlarged heart".
I got a bunch of workups because I fainted when exercising on a hot day when I'd eaten a bit too much and was a little infatuated with the instructor (I was the only student). I got the plain echo and a stress echo and was told I had athlete's heart, which they are not sure is good or bad. I also wore a Holter monitor for 30 days, and literally in the last 5 minutes of the period I had two bad heartbeats, so now I have A-Fib on my chart, and my doc says athlete's heart can make it worse, so since then I'm only supposed to do an hour a day of cardio.
When all this was going on I had manifested an "evil twin" whose harebrained scheme had just blown up right at the time I had those two bad heartbeats, and I can't believe the emotional distress I'd caused myself didn't have anything to do with it. I have one of those Kardia cards and haven't seen it happen again.
>For example, the “Settings” menu item (third from the top) has an icon. But the other item in its grouping “Privacy Report” does not. I wonder why?
Isn't it obvious? Because compared to "Settings", it is a far less important, infrequently used setting.
More than that, Trump said yesterday that Netflix's purchase of WB "might be problematic" and that he would be "personally involved in the decision of approving it".
He's trying to shake down Netflix to make it pay fealty.
California is now coal free as the world’s fourth largest economy.
Spotify is trying to avoid paying artists and keep revenue from Spotify customers for itself.
Yes! I wouldn't want to be the last patient to not be revived, but that's just regular death anyway.
> Roman Empire merely improved roads in many places
/s? This is literally a Monty Python sketch.
In fact that is where AI could win. An in-house system only needs to serve the needs of one customer, whereas the SaaS has to be built for the imagined needs of many customers -- when you're lucky, you can "build one to throw away" and not throw it away.
Nobody wants to ship that! They want perpetually upgraded live service games instead, because that's recurring revenue.
“The green groups, including Greenpeace…”
You'd be hearing a few more hair-raising failure stories if they hadn't done that. And possibly a few big customers instituting Win11 or Copilot site-wide bans, or just straight up going out of business. In businesses other than software, it's possible to be legally liable for mistakes.
The cost of writing simple code has dropped 90%.
If you can reduce a problem to a point where it can be solved by simple code you can get the rest of the solution very quickly.
Reducing a problem to a point where it can be solved with simple code takes a lot of skill and experience and is generally still quite a time-consuming process.
> No Gemini in Google apps unless you're paying for Google AI
Not true. Gemini in Google Apps (Gemini for Workspace) is included by default as a set of core features (Help me write in Docs, the Gemini side panel in Docs/Sheets/Meet, etc.). The AI Pro tier of Google One adds additional AI functionality; billed annually (which seems the correct comparison, given all your other price quotes are per year and seem to use annual billing pricing), it is $199, or $100 more than the 2TB tier without AI Pro.
> the midterms runoffs
Do you mean primaries? Runoffs are a thing in some elections in the US, but not a thing that would start in spring for the congressional midterms.
This is just a tl;dr of the article with a mean-spirited barb added.
I've settled in on this as well for most of my day-to-day coding. A lot of extremely fancy tab completion, using the agent only for manipulation tasks I can carefully define. I'm currently in a "write lots of code" mode which affects that, I think. In a maintenance mode I could see doing more agent prompting. It gives me a chance to catch things early and then put in a correct pattern for it to continue forward with. And honestly for a lot of tasks it's not particularly slower than "ask it to do something, correct its five errors, tweak the prompt" work flow.
I've had net-time-savings with bigger agentic tasks, but I still have to check it line-by-line when it is done, because it takes lazy shortcuts and sometimes just outright gets things wrong.
Big productivity boost, it takes out the worst of my job, but I still can't trust it at much above the micro scale.
I wish I could give a system prompt for the tab complete; there's a couple of things it does over and over that I'm sure I could prompt away but there's no way to feed that in that I know of.
And the US is for young people? Europe is for citizens, the US is an exploitation engine for consumers.
https://www.ted.com/talks/scott_galloway_how_the_us_is_destr...
(Doesn’t touch on unaffordable housing prices, social security benefits being cut 20% in a decade when the “trust fund” is exhausted, lack of opportunity and living wages, etc in the US)
It's so broken. I have a suspicion the AI teams are in some kind of an internal standoff with privacy/compliance teams, and Copilot simply never gets access to any tools or data.
Because if not that, then I don't know what. I can half-ass a better product with ChatGPT API and a PowerShell script, and I could've since GPT-4 was released; in fact the product can actually write itself, it's that simple to do a better job.
It is discouraged to post AI-generated comments to Hacker News, even if disclosed.
I think all the em-dashes came from scraping Wordpress blogs. The Wordpress editor does "typography"; the em-dashes thus introduced survive the HTML-to-Markdown process used to scrape the posts, and end up in datasets.
EDIT: Also PDFs authored in MS Word.
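The mechanism is easy to demonstrate: the blog engine "smartens" plain dashes into a typographic em-dash entity, and HTML-to-text tooling decodes that entity to a literal U+2014 which then rides along into the dataset. A minimal stdlib sketch (`fake_texturize` is a simplified stand-in for WordPress's wptexturize, whose actual rules are more involved):

```python
import html

def fake_texturize(text):
    """Simplified stand-in for WordPress's wptexturize: smarten '--' into
    an em-dash entity when rendering the post as HTML."""
    return text.replace("--", "&mdash;")

def html_to_markdown(markup):
    """Toy HTML-to-Markdown step: real scrapers do much more, but like most
    of them they decode entities to literal Unicode characters."""
    return html.unescape(markup)

author_draft = "LLMs are great -- when they work."
rendered = fake_texturize(author_draft)  # what the blog actually serves
scraped = html_to_markdown(rendered)     # what lands in the training set

print(rendered)              # the '--' is now an '&mdash;' entity
print("\u2014" in scraped)   # the typographic dash (U+2014) made it through
```

The author typed `--`, but the dataset only ever sees the typographic character.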
"The rise in early-onset cancer incidence does not consistently signal a rise in the occurrence of clinically meaningful cancer. While some of the increase in early-onset cancer is likely clinically meaningful, it appears small and limited to a few cancer sites. Much of the increase appears to reflect increased diagnostic scrutiny and overdiagnosis. Interpreting rising incidence as an epidemic of disease may lead to unnecessary screening and treatment while also diverting attention from other more pressing health threats in young adults" [1].
[1] https://jamanetwork.com/journals/jamainternalmedicine/articl...
Are we treating MS like a start-up now?
No, but the biggest users who pay the most do.
Can you buy Uber's "Ride of Glory" data? [1]
[1] https://www.cbsnews.com/sanfrancisco/news/uber-crunches-user...
The proposed solution only works for answers where objective validation is easy. That's a start, but it's not going to make a big dent in the hallucination problem.
There's a principle in distributed systems that you can't really count on clocks being synchronized in a very large system. The thing about Parallel Sysplex, though, is that it is not particularly scalable: it maxes out at 32 nodes. But those nodes are pretty big -- the system overall is big enough for most of what the Fortune 500 does, yet tiny compared to Google, Facebook, or a handful of really big systems. Sysplex revolves around distributed data structures similar to what Hazelcast provided in the beginning.
> The judge has the power to declare the Sheriff professionally incompetent
Do they?
In safety critical contexts, you're not usually using the standard library. Or at least, you're using core, not alloc or std.
Panics can still exist, of course, but depending on the system design you probably don't want them either, which is a bit more difficult to remove but not the end of the world.
I hadn't seen that addendum yet, though; that's very cool!
I read all about the Hills when I was a kid, also in NH.
The basic thing I’d say is you cannot trust memories “recovered” under hypnosis at all. See also
https://en.wikipedia.org/wiki/Dissociative_identity_disorder
which is also a product of hypnosis therapy. I mean, it works for a lot of things, but not for getting the truth.
And then you install that 'security patch' and end up with a borked phone, apps that no longer work, new apps that you didn't ask for and so on.
Give me just the security updates please.
I’m actually somewhat stoked about generative AI from a “good enough” perspective, because at this inflection point where a lot of countries and organizations are looking for Microsoft alternatives (digital sovereignty, etc), this is the best time to be able to build and deploy alternatives with the productivity advantages (if any) AI might provide.
Big Tech thinks they have a moat, when it’s really diffuse power being made available via genAI to build software good enough to replace them.
I vote to just change the spelling to what almost everyone already thinks it is anyways.
It'll still be just as weird. But "chs" is just nonsensical. The idea that it would sound like "sh" is baffling. I mean, I know this is English spelling which is not known for its regularity, but this is just too much.
That doesn't sound like a problem with twin studies exploring the degree to which IQ is genetic, that sounds like a problem with people treating aggregate tendencies and associations as a basis for individual discrimination.
"Neurotypical"/"Neurodivergent" does the same thing, it just specifies the domain of abnormality. It is still better than "normal", but the difference is of degree rather than kind.
If you are specifically distinguishing autistic and not-autistic, "allistic" is more specific than "neurotypical" (one can be neurodivergent and not autistic) and also avoids any implication than one side is normal and the other is not. (Unfortunately, there is no very good direct replacement for "neurotypical"/"neurodivergent", but one can minimize the impact of that problem by not using them when the real concern is about presence or absence of a particular trait that is within the broad array deemed "neurodivergent".)
Lucky you. I have done that regardless of the project type, when the client wasn't happy with the x-for-$y delivery and delayed payments until they had their beloved Excel sheets.
I have also had to provide technical support in escalation meetings preceding delivery of said sheets.
> So your comment boils down to; plagiarism is fine as long as I don't have to think about it.
It is actually worse: plagiarism is fine if I'm shielded from such claims by using a digital mixer. When criminals use crypto tumblers to hide their involvement we tend to see that as proof of intent, not as absolution.
LLMs are copyright tumblers.
> The "world model" is what we often refer to as the "context".
No, we often do not, and when we do that's just plain wrong.
Myth: new knowledge never trumps old knowledge. Check the dates on those two publications.
> Bitcoin, and really all crypto 'currencies' were never meant to be currencies at all.
To be fair, there is a significant amount of disagreement about what a "currency" is supposed to be, and there is a large subset of people who believe that the desirable traits in a currency are exactly those things that make it function well as a speculative asset (notably, on average over a long time, value with respect to goods is at least flat and preferably increasing) while simultaneously not thinking the things that another large group of people sees as desirable for a currency (e.g., lack of extreme short-term volatility) are important.
I can't speak to the original designer of Bitcoin, but I wouldn't be surprised if it and most cryptocurrencies were designed to be currencies, just by people who have a very specific (and, IMV, wrong) idea of what a currency ought to be.
(David Bessis is a fan favorite here, and Paul Graham makes an appearance.)
Related, from last year: https://news.ycombinator.com/item?id=42200209
See WinUI after the Project Reunion announcement 5 years ago; unfortunately it fits exactly the same description, and we are way past COVID to use that as an excuse.
> In the earliest days of getting people to pay for cable TV when OTA was free, the pitch was that you'd see fewer/no commercials.
No, it was quality of reception, especially for people who were farther from (or had inconvenient terrain between them and) broadcast stations; literally the only thing on early cable was exactly the normal broadcast feed from the covered stations, which naturally included all the normal ads.
Premium add-on channels that charged on top of cable, of which I think HBO was the first, had being ad free among their selling points, but that was never part of the basic cable deal.
Unfortunately that is probably how it will end.
The fix for having worktrees be colocated is in progress. Not sure when it’ll be done but it’s coming.
Wow! I liked https://pubmed.ncbi.nlm.nih.gov/35408868/
Without regulation, you have no protections against these corporate actions. If you’re expecting or relying on corporations to act in good faith, you are going to be disappointed.
> Instead they had to give “goodies” personally to Trump in the form of a $15 million bribe
More of an in addition to than instead.
Employees are not IT and Infosec teams. What an employee wants as it relates to a corp system is mostly irrelevant, as the company owns and governs access to the system. It is not the employee’s data, broadly speaking.
You think data tied to individual users isn't any worse? That privacy has no value?
Oh thanks, that's really interesting. It makes more sense if there's a physical box involved -- I don't remember ever getting physical boxes for preinstalled software, but it makes sense that way.
>LLMs are text model, not world models and that is the root cause of the problem.
Is it though? In the end, the information in the training texts is a distilled proxy for the world, and the weighted model ends up being a world model, just a once-removed one.
Text is not that different to visual information in that regard (and humans base their world model on both).
>Not having a world model is a massive disadvantage when dealing with facts, the facts are supposed to re-inforce each other, if you allow even a single fact that is nonsense then you can very confidently deviate into what at best would be misguided science fiction, and at worst is going to end up being used as a basis to build an edifice on that simply has no support.
Regular humans believe all kinds of facts that are nonsense, many others that are wrong, and quite a few that are even counter to logic too.
And short of omnipresence and omniscience (directly examining the whole world), any world model, human or AI, is built on sets of facts, many of which might not be true or valid to begin with.
Seriously. I don't understand why this is even news or posted here.
If they were doubling prices or something then of course, raise the alarms. This is not that.
Let's call it "token representation not based on merit" and it doesn't sound that bad to walk them back.
Paramount bids $30 all cash for all of Warner Brothers Discovery. Netflix bids $27.75 “for Warner’s studio and HBO Max streaming business” only [1]. (“$23.25 in cash and $4.50 in shares” [2].)
The latter leaves behind “sports and news television brands around the world including CNN, TNT Sports in the U.S., and Discovery, top free-to-air channels across Europe, and digital products such as the profitable Discovery+ streaming service and Bleacher Report (B/R)” [3]. (Paramount is effectively bidding $5.9bn for these assets.)
Note that Zaslav, Warner’s CEO, is a prominent donor to Democrats [4], as is Reed Hastings, Netflix’s co-founder [5]. (Ted Sarandos, Netflix’s co-CEO with Greg Peters, is mixed, leaning Dem [6]. No clue on the latter.) Ellison is a staunch Trump ally. The partisan tinge will be difficult to ignore.
[1] https://www.wsj.com/business/media/paramount-makes-hostile-t...
[2] https://about.netflix.com/en/news/netflix-to-acquire-warner-...
[3] https://www.wbd.com/news/warner-bros-discovery-separate-two-...
[4] https://www.opensecrets.org/donor-lookup/results?name=david+...
[5] https://www.nytimes.com/2024/07/03/us/politics/reed-hastings...
[6] https://www.opensecrets.org/donor-lookup/results?name=Ted+Sa...
I find online grocery shopping shines for heavy and bulky things that are a huge pain to schlep home otherwise, especially stuff that lasts a while.
All juices/waters/beers/wines, paper towels, lots of oranges/grapefruits, cleaning products like bleach/detergent, etc. When they carry them up to your fourth-floor door it's just so much easier.
The smallish shops are good for stuff you can then easily carry in a bag by hand -- meat, veg, cheese, fresh bread.