NEW YORK, Sept 18 (Askume) – U.S. lawmakers on Wednesday pressed technology executives about their preparations to combat foreign disinformation threats ahead of the November election, with senators and executives agreeing that the roughly 48 hours before and after election day are the most vulnerable period.

    Microsoft (MSFT.O) President Brad Smith told the U.S. Senate Intelligence Committee: “There are potentially dangerous moments ahead. Today is 48 days before the election…I think the most dangerous moment will come 48 hours before the election.”

    The panel’s chairman, Senator Mark Warner, agreed with Smith but said the 48 hours after polls close on November 5 “could be just as important or more critical,” especially as election day approaches.

    Policy executives from Google (GOOGL.O) and Meta (META.O), which owns Facebook, Instagram and WhatsApp, also testified at the hearing.

    Several senators noted that Elon Musk’s X was invited to testify but declined. A spokesperson for X said this was because the company’s invited witness, former head of global affairs Nick Pickles, resigned earlier this month.

    TikTok was not invited to participate, according to a company spokesperson.

    To illustrate his concerns about the period just before people vote, Smith cited Slovakia’s 2023 election, in which a fake audio recording purporting to capture a party leader discussing electoral fraud was released shortly before the vote and spread online.

    Warner and other senators also pointed to tactics exposed earlier this month in a U.S. crackdown on alleged Russian influence campaigns, which included creating fake websites designed to look like those of real U.S. news organizations, including Fox News and the Washington Post.

    “How did this spread? How do we know how widespread it was?” Warner asked the executives. He gave the companies until next week to share data with the committee showing how many Americans viewed the content and how many ads were served.

    Tech companies have widely adopted labels and watermarks to counter the threat of new generative artificial intelligence technologies, which make it easier to create fake but realistic images, audio and video and have raised concerns about their impact on elections.

    Asked how their companies would respond if a deepfake image of a political candidate appeared before the election, both Smith and Meta’s president of global affairs, Nick Clegg, said their companies would label such material.

    Metadata could also hinder its spread, Clegg said.
