Fox News Plans a “Super Desk” and Virtual Set For Election Night Coverage (Exclusive)
... Whereas Hemmer (and King and Kornacki) previously relied on a touchscreen that is more akin to a giant iPhone, Hemmer will now be able to display data in 3D space on set, interacting with the data thanks to infrared sensors that can track him. The company is also using a voice-to-text language model, allowing Hemmer to call up data just by speaking.
Among other bells and whistles are a “path to 270 map,” which will place a 3D map in the studio, letting anchors explain how the electoral votes are stacking up, as well as a “top 5 closest races” tool, which will visualize certain key close races.
And Studio M itself is getting an upgrade, including a “Super Desk” which will be the home base for anchors, correspondents and analysts over the course of election night. Fox also built a two-story-tall accent wall, as well as a 20-foot-long ultra HD video wall that will display photos and video over the course of the evening. ...
See the full story here: https://www.hollywoodreporter.com/news/politics-news/fox-news-election-night-super-desk-augmented-reality-1236045655/
Michael Parkinson’s son defends new AI podcast
...
In 2022, the union Equity launched a "Stop AI Stealing the Show" campaign. The use of AI was a major factor in the strikes that brought Hollywood to a standstill last year.
However, as Sir Michael is dead and therefore no longer has a livelihood to protect, the debate in this case is more about whether or not it is ethical to have him say things he never said in real life, and also whether AI versions of real hosts are something listeners even want. ...
Explaining how the podcast would work, Anderson said: "These are brand new interviews, and the AI we’ve created is as close to the late Sir Michael as we could possibly get it.
"He is autonomous, so we let him start the interview and after that it is up to AI Sir Michael, who is trained on Sir Michael’s style and the interview questions."
He added: "We can’t tell you the guests yet, we have a few slots remaining, but they are notable, noteworthy people." ...
See the full story here: https://www.bbc.com/news/articles/cn42x2gxl0jo
Haptic artificial muscle skin for extended reality
Abstract
Existing haptic actuators are often rigid and limited in their ability to replicate real-world tactile sensations. We present a wearable haptic artificial muscle skin (HAMS) based on fully soft, millimeter-scale, multilayer dielectric elastomer actuators (DEAs) capable of significant out-of-plane deformation, a capability that typically requires rigid or liquid biasing. The DEAs use a thickness-varying multilayer structure to achieve large out-of-plane displacement and force, maintaining comfort and wearability. Experimental results demonstrate that HAMS can produce complex tactile feedback with high perception accuracy. Moreover, we show that HAMS can be integrated into extended reality (XR) systems, enhancing immersion and offering potential applications in entertainment, education, and assistive technologies.
See the full paper here: https://www.science.org/doi/10.1126/sciadv.adr1765
CAA Inks Deal With AI Company Veritone To Store Talent’s Digital Assets
In a sign of the times, Creative Artists Agency has inked a deal with an artificial intelligence company to store clients’ digital assets.
CAA and Veritone announced their partnership on Tuesday, leading to the creation of the “CAAVault,” a synthetic media vault that will store all intellectual property related to all CAA talent’s name, image and likeness. This includes digital scans and voice recordings. ...See the full story here: https://www.msn.com/en-us/money/companies/caa-inks-deal-with-ai-company-veritone-to-store-talent-s-digital-assets/ar-BB1mmFiU?apiversion=v2&noservercache=1&domshim=1&renderwebcomponents=1&wcseo=1&batchservertelemetry=1&noservertelemetry=1
Disney plans AI launch to transform entertainment production
... The multinational mass media and entertainment conglomerate is said to be gearing up to “transform its creative output” with its latest venture into the realm of artificial intelligence, a move that will involve “hundreds” of employees, according to The Wrap. ...
“Don’t fixate on its ability to be disruptive — fixate on [tech’s] ability to make us better and tell better stories. Not only better stories, but to reach more people,” the Disney CEO continued. “You’re never going to get in the way of it. There isn’t a generation of human beings that has ever been able to stand in the way of technological advancement.” ...
See the full story here: https://rollingout.com/2024/10/25/disney-plansr-ai-launch-entertainment/
Why I’m Leaving OpenAI and What I’m Doing Next
The TL;DR is:
- I want to spend more time working on issues that cut across the whole AI industry, to have more freedom to publish, and to be more independent;
- I will be starting a nonprofit and/or joining an existing one and will focus on AI policy research and advocacy, since I think AI is unlikely to be as safe and beneficial as possible without a concerted effort to make it so;
- Some areas of research interest for me include assessment/forecasting of AI progress, regulation of frontier AI safety and security, economic impacts of AI, acceleration of beneficial AI applications, compute governance, and overall “AI grand strategy”;
- I think OpenAI remains an exciting place for many kinds of work to happen, and I’m excited to see the team continue to ramp up investment in safety culture and processes;
- I’m interested in talking to folks who might want to advise or collaborate on my next steps.
See the full post here: https://milesbrundage.substack.com/p/why-im-leaving-openai-and-what-im?utm_source=substack&utm_medium=email
Using AI for Political Polling
... Despite these frailties, obsessive interest in polling nonetheless consumes our politics. Headlines more likely tout the latest changes in polling numbers than the policy issues at stake in the campaign. This is a tragedy for a democracy. We should treat elections like choices that have consequences for our lives and well-being, not contests to decide who gets which cushy job. ...
Large language models, the AI foundations behind tools like ChatGPT, are built on top of huge corpora of data culled from the Internet. These are models trained to recapitulate what millions of real people have written in response to endless topics, contexts, and scenarios. For a decade or more, campaigns have trawled social media, looking for hints and glimmers of how people are reacting to the latest political news. This makes asking questions of an AI chatbot similar in spirit to doing analytics on social media, except that chatbots are generative: you can ask them new questions that no one has ever posted about before, you can generate more data from populations too small to measure robustly, and you can immediately ask clarifying questions of your simulated constituents to better understand their reasoning. ...
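The simulated-constituent idea described above can be sketched in a few lines of Python. Everything here is illustrative, not the authors' actual methodology: `query_model` is a hypothetical stub standing in for a real LLM API call, and the persona prompt and answer options are invented for the example.

```python
import random
from collections import Counter

def query_model(prompt: str, seed: int) -> str:
    """Hypothetical stand-in for an LLM call; a real version would send
    `prompt` to a language-model API and parse the text completion."""
    rng = random.Random(seed)
    # Stubbed behavior: persona conditioning shifts the response distribution.
    if "liberal" in prompt:
        return "support" if rng.random() < 0.7 else "oppose"
    return "oppose" if rng.random() < 0.6 else "support"

def simulate_poll(question: str, persona: str, n: int = 100) -> Counter:
    """Ask n persona-conditioned simulated respondents the same question
    and tally their answers, like topline results from a real poll."""
    tallies = Counter()
    for i in range(n):
        prompt = (
            f"You are a {persona} US voter. Answer with 'support' or 'oppose'.\n"
            f"Question: {question}"
        )
        tallies[query_model(prompt, seed=i)] += 1
    return tallies

question = "Do you support increased US aid to Ukraine?"
for persona in ("liberal", "conservative"):
    print(persona, dict(simulate_poll(question, persona)))
```

The generative advantage the article points to shows up in the prompt: unlike social-media analytics, you can pose brand-new questions, oversample tiny subpopulations by raising `n`, or follow up with "why?" prompts to the same simulated respondent.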
Our major systemic failure happened on a question about US intervention in the Ukraine war. In our experiments, the AI agents conditioned to be liberal were predominantly opposed to US intervention in Ukraine and likened it to the Iraq war. Conservative AI agents gave hawkish responses supportive of US intervention. This is pretty much what most political experts would have expected of the political equilibrium in US foreign policy at the start of the decade but was exactly wrong in the politics of today. ...
While AI models are dependent on the data they are trained with, and all the limitations inherent in that, what makes AI agents special is that they can automatically source and incorporate new data at the time they are asked a question. AI models can update the context in which they generate opinions by learning from the same sources that humans do. ...
For all the hand-wringing and consternation over the accuracy of US political polling, national issue surveys still tend to be accurate to within a few percentage points. ...
Where AI will work best is as an augmentation of more traditional human polls. Over time, AI tools will get better at anticipating human responses, and also at knowing when they will be most wrong or uncertain. They will recognize which issues and human communities are in the most flux, where the model’s training data is liable to steer it in the wrong direction. In those cases, AI models can send up a white flag and indicate that they need to engage human respondents to calibrate to real people’s perspectives. ...
We expect these AI-assisted polls will be initially used internally by campaigns, with news organizations relying on more traditional techniques. It will take a major election where AI is right and humans are wrong to change that.
See the full story here: https://ash.harvard.edu/articles/using-ai-for-political-polling/
Shelly Palmer email
...
In the news: More than 13,000 creatives (including some famous authors, musicians, and actors) signed a statement that expresses their growing concerns over the unauthorized use of copyrighted works to train generative AI models. The one-sentence statement published by Fairly Trained, an advocacy group founded by former Stability AI executive Ed Newton-Rex, reads: "The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted."
Newton-Rex told The Guardian, “There are three key resources that generative AI companies need to build AI models: people, compute, and data. They spend vast sums on the first two – sometimes a million dollars per engineer, and up to a billion dollars per model. But they expect to take the third – training data – for free.” ...
When you give a Claude a mouse
... Most importantly, I was presented with finished drafts to comment on, not a process to manage. I simply delegated a complex task and walked away from my computer, checking back later to see what it did (the system is quite slow). ...
But what made this interesting is that the AI had a strategy, and it was willing to revise it based on what it learned. I am not sure how that strategy was developed by the AI, but the plans were forward-looking across dozens of moves and insightful. ...
Before the system crashed (a problem not with Claude but with the virtual desktop I was using), the AI made over 100 independent moves without asking me any questions. ...
I reloaded the agent and had it continue the game from where we left off, but I gave it a bit of a hint: you are a computer, use your abilities. It then realized it could write code to automate the game: a tool building its own tool. Again, however, the limits of the AI came into play, and the code did not quite work, so it decided to go back to the old-fashioned way of using a mouse and keyboard. ...
What does this mean?
You can see the power and weaknesses of the current state of agents from this example. On the powerful side, Claude was able to handle a real-world example of a game in the wild, develop a long-term strategy, and execute on it. It was flexible in the face of most errors, and persistent. It did clever things like A/B testing. And most importantly, it just did the work, operating for nearly an hour without interruption.
On the weak side, you can see the fragility of current agents. LLMs can end up chasing their own tail or being stubborn, and you could see both at work. Even more importantly, while the AI was quite robust to many forms of error, it just took one (getting pricing wrong) to send it down a path that made it waste considerable time. Given that current agents aren’t fast or cheap, this is concerning. ...
The AI didn’t always check in regularly and could be hard to steer; it “wants” to be left alone to go and do the work. Guiding agents will require radically different approaches to prompting, and they will require learning what they are best at. ...
See the full story here: https://open.substack.com/pub/oneusefulthing/p/when-you-give-a-claude-a-mouse?r=5xhla&utm_campaign=post&utm_medium=email
Sam Altman’s favorite AGI question
..."What do you hope society looks like when AGI gets built and how do you conceptualize the positive version of it?"... Sam Altman, OpenAI
See the full story here: https://www.msn.com/en-in/money/topstories/this-is-openai-ceo-sam-altman-s-favorite-question-about-agi/ar-AA1syu0O