Surgeons make history, perform world’s first fully robotic heart transplant
A heart team at King Faisal Specialist Hospital and Research Center (KFSHRC) in Riyadh, Saudi Arabia, made a bit of history, completing the world’s first fully robotic heart transplant.
The procedure, which lasted roughly two and a half hours, was performed on a 16-year-old patient with end-stage heart failure. One reason this patient was selected was that he had specifically requested that the heart team not open his chest. ...
“This remarkable achievement would not have been possible without the unwavering support of our visionary leadership, who have prioritized the development of the healthcare sector, paving the way for a transformative leap in healthcare services, unlocking new possibilities to elevate the quality of life for patients both locally and globally,” he added.
The patient is now recovering, with no signs of significant complications. ...
See the full story here: https://cardiovascularbusiness.com/topics/clinical/cardiac-surgery/worlds-first-fully-robotic-heart-transplant?utm_source=newsletter&utm_medium=cvb_weekend
(UN) Artificial Intelligence Advisory Board’s Report ‘a Crucial Milestone in Efforts to Ensure AI Serves all of Humanity’, Says Secretary-General, in Video Message
...
This Advisory Body was the first of its kind in the AI space — a geographically diverse, gender-balanced group bringing together experts from Governments, the private sector, civil society and academia.
It was charged with a pressing question: how can AI be governed for humanity — particularly for those who are often under-represented and left out? Working at an impressive pace, the Advisory Body tackled its complex mandate with remarkable effectiveness.
As they share their final report, I commend the breadth of their recommendations, which include creating:
- An International Scientific Panel on AI — to promote common understanding on AI capabilities, opportunities and risks;
- a Global Dialogue on AI Governance at the UN — to anchor AI governance in international norms and principles, including human rights;
- a Global Fund on AI for the SDGs [Sustainable Development Goals] — to bridge the AI divide;
- an AI Capacity Development Network — to boost AI capacities and expertise, particularly in developing countries;
- a Standards Exchange — to foster technical compatibility;
- a Global Data Framework — to enable flourishing local AI ecosystems;
- and a small AI Office at the United Nations — to assist in all these initiatives. ...
See the full press release here: https://press.un.org/en/2024/sgsm22368.doc.htm
AI Companions (AIC): The Future of Personalized Relationships
Key Takeaways:
- AI Companions is a web3 platform that offers highly customizable, emotionally intelligent virtual companions to users seeking interaction in the digital age.
- The platform combines AI, VR/AR and blockchain technologies to ensure users interact with virtual entities transparently and securely.
- The platform is powered by AIC tokens, which are used to pay fees and unlock various features via staking.
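The last takeaway describes a common web3 pattern: stake tokens to unlock functionality. The sketch below is a generic, purely hypothetical illustration of that pattern in Python; the class, feature tiers, and thresholds are my own inventions and do not reflect AI Companions' actual contracts, features, or pricing.

```python
# Generic, illustrative sketch of a "stake tokens to unlock features" gate,
# modeled off-chain in Python. FEATURE_TIERS and AICWallet are hypothetical
# names; nothing here comes from the AI Companions platform itself.

FEATURE_TIERS = {            # hypothetical stake thresholds, in AIC tokens
    "basic_chat": 0,
    "custom_avatar": 100,
    "memory_upgrade": 500,
}

class AICWallet:
    def __init__(self, balance: float):
        self.balance = balance   # liquid tokens available to spend or stake
        self.staked = 0.0        # tokens locked to gate feature access

    def stake(self, amount: float) -> None:
        if amount > self.balance:
            raise ValueError("insufficient token balance")
        self.balance -= amount
        self.staked += amount

    def can_access(self, feature: str) -> bool:
        return self.staked >= FEATURE_TIERS[feature]

wallet = AICWallet(balance=250)
wallet.stake(150)
print(wallet.can_access("custom_avatar"))   # True
print(wallet.can_access("memory_upgrade"))  # False
```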
See the full story here: https://learn.bybit.com/en/web3/what-is-ai-companions-aic/
Meta Pushes for Reduced AI Regulations in Europe
Meta, alongside 48 other organizations, has signed an open letter urging the European Union (EU) to ease its stringent AI regulations. The letter warns that the region risks falling behind in the global AI race if it continues to impose restrictive policies, particularly around data usage.
The Breakdown:
- Concerns Over AI Regulations: The letter, signed by AI-focused companies and institutions, calls for the EU to remove red tape that hinders AI development. It argues that Europe is becoming less competitive due to inconsistent regulatory decisions, particularly concerning data usage.
- Impact on AI Rollout: Meta has faced delays in launching its AI chatbot in Europe due to EU requirements for user consent on data usage. This has led to region-specific provisions and setbacks, while other markets have had access to AI tools much earlier.
- Nick Clegg's Criticism: Meta's Head of Global Affairs, Nick Clegg, has voiced frustration over these regulations, claiming that the EU should focus on adopting technology faster instead of slowing down progress through over-regulation.
- Corporate Pressure on the EU: The signatories, including Spotify and Ericsson, argue that the EU's regulatory environment could cause Europe to miss out on the benefits of cutting-edge AI technologies. They urge a harmonized approach similar to the GDPR to ensure that AI innovation happens at the same scale as in other regions.
While businesses push for faster AI adoption, EU regulators must weigh the potential risks of loosening regulations. The debate highlights the challenge of balancing innovation with responsible governance in the rapidly evolving world of AI.
See the full story here: https://urldefense.com/v3/https://link.mail.beehiiv.com/ss/c/u001.fQ96O6y-x-LMArpopApPfRcYJscqGHOOyYpjgZ2TAmKHtHE72wGxhZfFHCtdtjk2o0ad8YAOXPLTSmrPD2eLpztJZJYKvjefZ2Hy43oLXT0/49x/1G1bQGKqQ8OJChevfizUkA/h2/h001._FiTdWLBHfVHGdAIBIatczI03KTdveRW1fX3HYppYM4;!!LIr3w8kk_Xxm!pXI36LgjAtPK5lG1Zt_df3WVNrZdsiMFEXSSSvZ5j7zmwd0EyQbiFjnZY9bK23PsH6hOMC3LODAWndkhcf4yaw$
Complementary Roles of Human and AI Endorsers in Advertising
This paper concludes that both human and AI-generated endorsers in advertising campaigns offer unique advantages and can be effective depending on the context. Human endorsers tend to foster emotional connections, authenticity, and trust, while AI-generated endorsers excel in personalization and tailoring content to consumer preferences. Rather than viewing them as competitors, the paper suggests that these two approaches should be seen as complementary. Future research is encouraged to explore the best ways to combine human and AI endorsers for enhanced advertising outcomes, considering different demographic segments and product types.
See the full paper here: https://journal.uhamka.ac.id/index.php/agregat/article/view/12491
US to convene global AI safety summit in November
... Commerce Secretary Gina Raimondo and Secretary of State Antony Blinken will host the first meeting of the International Network of AI Safety Institutes in San Francisco on Nov. 20-21 to "advance global cooperation toward the safe, secure, and trustworthy development of artificial intelligence."
The network members include Australia, Canada, the European Union, France, Japan, Kenya, South Korea, Singapore, Britain, and the United States. ...
The San Francisco meeting will include technical experts from each member’s AI safety institute, or equivalent government-backed scientific office, to discuss priority work areas, and advance global collaboration and knowledge sharing on AI safety. ...
See the full story here: https://www.reuters.com/technology/artificial-intelligence/us-convene-global-ai-safety-summit-november-2024-09-18/
Artificial intelligence laws in the US states are feeling the weight of corporate lobbying
... So far, there is limited evidence that states are following the EU’s lead when drafting their own AI legislation. There is strong evidence of lobbying of state legislators by the tech industry, which does not seem keen on adopting the EU’s rules, instead pressing for less stringent legislation that minimizes compliance costs but which, ultimately, is less protective of individuals. Two enacted bills in Colorado and Utah and two draft bills in Oklahoma and Connecticut, among others, illustrate this. ...
A major difference between the state bills and the AI Act is their scope. The AI Act takes a sweeping approach aimed at protecting fundamental rights and establishes a risk-based system, where some uses of AI, such as the ‘social scoring’ of people based on factors such as their family ties or education, are prohibited. ...
In contrast, the state bills are narrower. The Colorado legislation directly drew on the Connecticut bill, and both include a risk-based framework, but of a more limited scope than the AI Act. ...
Another explanation is the hesitancy embodied by Governor Lamont. In the absence of unified federal laws, states fear that strong legislation would cause a local tech exodus to states with weaker regulations, a risk less pronounced in data-protection legislation. ...
For these reasons, lobbying groups claim to prefer national, unified AI regulation over state-by-state fragmentation, a line that has been parroted by big tech companies in public. But in private, some advocate for light-touch, voluntary rules all round, showing their dislike of both state and national AI legislation. ...
See the full story here: https://www.nature.com/articles/d41586-024-02988-0
Here’s what I made of Snap’s new augmented-reality Spectacles
... These fifth-generation Spectacles can display visual information and applications directly on their see-through lenses, making objects appear as if they are in the real world. The interface is powered by the company’s new operating system, Snap OS. ...
In my demo, I was able to stack Lego pieces on a table, smack an AR golf ball into a hole across the room (at least a triple bogey), paint flowers and vines across the ceilings and walls using my hands, and ask questions about the objects I was looking at and receive answers from Snap’s virtual AI chatbot. There was even a little purple virtual doglike creature from Niantic, a Peridot, that followed me around the room and outside onto a balcony.
But look up from the table and you see a normal room. The golf ball is on the floor, not a virtual golf course. The Peridot perches on a real balcony railing. Crucially, this means you can maintain contact—including eye contact—with the people around you in the room. ...
To accomplish all this, Snap packed a lot of tech into the frames. There are two processors embedded inside, so all the compute happens in the glasses themselves. Cooling chambers in the sides did an effective job of dissipating heat in my demo. Four cameras capture the world around you, as well as the movement of your hands for gesture tracking. The images are displayed via micro-projectors, similar to those found in pico projectors, that do a nice job of presenting those three-dimensional images right in front of your eyes without requiring a lot of initial setup. ...
Snap isn’t selling the glasses directly to consumers but requires you to agree to at least one year of paying $99 per month for a Spectacles Developer Program account that gives you access to them. ...
Having said that, it all worked together impressively well. The three-dimensional objects maintained a sense of permanence in the spaces where you placed them—meaning you can move around and they stay put. The AI assistant correctly identified everything I asked it to. There were some glitches here and there ...
See the full story here: https://www.technologyreview.com/2024/09/17/1104025/snap-spectacles-ar-glasses/
Lionsgate signs deal to train AI model on its movies and shows
...
Today, Lionsgate — the studio behind films like the John Wick and Hunger Games franchises — announced that it is partnering with Runway to create a new customized video generation model intended to help “filmmakers, directors and other creative talent augment their work.”
In a statement about the deal, Lionsgate vice chair Michael Burns described it as a path toward creating “capital-efficient content creation opportunities” for the studio, which sees the technology as “a great tool for augmenting, enhancing and supplementing our current operations.” Burns also insisted that “several of our filmmakers are already excited about its potential applications to their pre-production and post-production process.” ...
See the full story here: https://www.theverge.com/2024/9/18/24248115/lionsgate-runway-ai-deal
California’s 5 AI-related bills
Do California's legislative efforts to regulate AI reflect a growing concern for digital ethics, personal rights, and democratic integrity, or are they legislative overreach that will stifle innovation? There are five key bills ready to be signed into law that address issues ranging from the use of digital replicas in contracts and posthumous rights of deceased personalities to the transparency and safety of AI platforms. I've read them. You should, too. The links and short descriptions are below.
AB 2602: Contracts: digital replicas - This bill mandates that contracts for personal or professional services involving digital replicas must clearly specify the intended uses of the replica. It also requires that individuals involved have access to legal counsel or labor union representation during contract negotiations in order to protect performers' rights in the digital age.
SB 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act - This bill establishes safety regulations for "covered AI models," defined by thresholds for computational power and training cost (sketched in the code example after this list). Developers of these models must implement safety measures, conduct regular audits, and report significant incidents to the California Department of Technology.
AB 2013: Artificial intelligence: transparency - This bill requires businesses that use generative AI systems to disclose any use of copyrighted materials in the training data. It also mandates that clear information about the AI system's capabilities and limitations be provided to users.
AB 1836: Deceased personalities: digital replicas - This bill prohibits the use of digital replicas of deceased personalities in audiovisual works without prior consent from their estate. It extends protections to ensure that digital replicas are not used posthumously without authorization, addressing concerns of exploitation.
AB 2655: Defending Democracy from Deepfake Deception Act of 2024 - This bill requires clear disclosure of AI-generated content in political advertisements and campaign materials. It also prohibits the distribution of deceptive audio or visual media of candidates that could mislead voters, aiming to protect electoral integrity.
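To make SB 1047's scope concrete, here is a minimal sketch of the covered-model threshold test in Python. The numeric cutoffs (more than 10^26 training operations at a cost above $100 million, with lower thresholds of 3×10^25 operations and $10 million for fine-tunes of covered models) are taken from the bill text as I read it; the function itself is my own illustration, not legal guidance.

```python
# Minimal sketch of SB 1047's "covered model" threshold test. The cutoffs
# below reflect my reading of the bill text; treat them as illustrative.

def is_covered_model(training_flops: float,
                     training_cost_usd: float,
                     fine_tune_of_covered_model: bool = False) -> bool:
    """Return True if a model would fall under SB 1047's safety requirements."""
    if fine_tune_of_covered_model:
        # Fine-tunes of a covered model have lower thresholds.
        return training_flops >= 3e25 and training_cost_usd >= 10_000_000
    return training_flops > 1e26 and training_cost_usd > 100_000_000

# Example: a frontier-scale training run vs. a smaller one
print(is_covered_model(training_flops=2e26, training_cost_usd=150_000_000))  # True
print(is_covered_model(training_flops=5e25, training_cost_usd=30_000_000))   # False
```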
See the full story here: https://shellypalmer.com/2024/09/california-vs-ai-a-battle-for-democracy-or-a-war-on-progress/