Categories
Business Culture Digital - AI & Apps Lifestyle Programs & Events

AM Best to host IMCA/AM Best marketing leader lunch with AXIS’ Joe Cohen

OLDWICK, N.J. — (BUSINESS WIRE) — #insurance — AM Best will host a joint presentation with the Insurance Marketing & Communications Association (IMCA) on Friday, Dec. 8, 2023, at 12 p.m. EST.

 

In this live, interactive roundtable discussion, Joe Cohen, chief marketing and communications officer at AXIS, will discuss:

  • Rebranding strategies
  • Unlocking the value of corporate citizenship
  • Building a strong employer branding initiative
  • How marketers can help their organizations expand into new and adjacent markets
  • Ways to infuse diversity and inclusion into an organization’s culture
  • Uncovering the keys to success in digital marketing.

 

Panelists include:

  • Joe Cohen, chief marketing and communications officer, AXIS
  • Dave Evans, senior associate, Aartrijk, and former IMCA Board member
  • Lori Chordas, senior associate editor, AM Best TV

 

Attendees can submit questions during registration or by emailing webinars@ambest.com. The event will be streamed in video and audio formats, and playback will be available to registered viewers shortly after the event.

 

AM Best is a global credit rating agency, news publisher and data analytics provider specializing in the insurance industry. Headquartered in the United States, the company does business in over 100 countries with regional offices in London, Amsterdam, Dubai, Hong Kong, Singapore and Mexico City. For more information, visit www.ambest.com.

 

Copyright © 2023 by A.M. Best Company, Inc. and/or its affiliates.

ALL RIGHTS RESERVED.

Contacts

Lee McDonald
Senior Vice President, Publication & News Services
+1 908 882 2102
lee.mcdonald@ambest.com

Categories
Business Culture Digital - AI & Apps Lifestyle Perspectives Regulations & Security Technology

A US judge rejects X’s bid to overturn a May 2022 FTC order imposing restrictions on its data security practices and declines to stop a deposition of Elon Musk

—  SAN FRANCISCO — A federal judge on Thursday rejected an attempt by Elon Musk’s social media company to overturn a May 2022 order by the Federal Trade Commission that imposed requirements for safeguarding the personal data of its users.

 

A pile of characters removed from a sign on the Twitter headquarters building is seen in San Francisco, Monday, July 24, 2023, after Musk changed the name of the company to X. (Godofredo A. Vásquez/AP)

The company, then known as Twitter, had agreed to the order and a fine of $150 million after the FTC found that it asked for user phone numbers as a security mechanism but used them for marketing.

Musk bought the company later that year and renamed it X. By then, the FTC had launched a new investigation based on an explosive whistleblower complaint by former Twitter head of security Peiter Zatko, who said the company’s engineers had wide access to data with ineffective tracking.

Musk’s legal team asked U.S. Magistrate Judge Thomas Hixson to throw out the FTC order on the grounds that the agency had improperly increased its scrutiny after Musk took over and also pressured an outside assessor of the company’s security practices to find fault with them.

Hixson denied that motion after a hearing in San Francisco, ruling that the court was only involved in the underlying case for limited procedural reasons, such as the transfer of case documents to the Justice Department. He wrote that he lacked authority to set aside a consent order approved by an FTC administrative judge.

Hixson also declined to interfere in the FTC investigation by letting Musk avoid a deposition.

In his 11-page ruling, Hixson noted other problems with X’s argument. For example, the company had cited an Ernst & Young employee who said in his deposition that he felt the FTC expected him to find issues with X’s privacy program. But Hixson noted that the same employee said his work was delayed by the constant turnover in the executive ranks after Musk took charge and the lack of designated parties in charge of multiple aspects of the privacy program.

And while it is true the FTC increased its activity post-takeover, it had provided reasons for that, Hixson wrote.

“The government says this increase in investigative activity should not be surprising because Musk directed at least five rounds of terminations, layoffs or other reductions in X Corp.’s workforce, which affected the security, governance, risk and compliance team. The government argues that the FTC was concerned about X Corp.’s ability to comply with the Administrative Order given these significant changes to the company,” he ruled.

“As for deposing Musk, the government argues that the major changes to the company appear to have been initiated by Musk himself,” the judge said in declining to stop the deposition.

Joseph Menn / Washington Post

Techmeme

Categories
Digital - AI & Apps Lifestyle Science Technology

Meta unveils new AI tools to edit images and generate videos from text instructions, which use its image generation model Emu

Mike Wheatley / SiliconANGLE:

 

 

—  Artificial intelligence researchers from Meta Platforms Inc. said they have made significant advances in AI-powered image and video generation.

 

The Facebook and Instagram parent has developed new tools that enable more control over the image editing process via text instructions, and a new method for text-to-video generation. The new tools are based on Meta’s Expressive Media Universe or Emu, the company’s first foundational model for image generation.

 

Emu was announced in September, and today it’s being used in production, powering experiences such as Meta AI’s Imagine feature that allows users to generate photorealistic images in Messenger. In a blog post, Meta’s AI researchers explained that AI image generation is often a step-by-step process: the user tries a prompt, and the picture that’s generated isn’t quite what they had in mind. As a result, users are forced to keep tweaking the prompt until the created image is closer to what they had imagined.

 

Emu Edit for image editing

What Meta wants to do is to eliminate this process and give users more precise control, and that’s what its new Emu Edit tool is all about. It offers a novel approach to image manipulation, where the user simply inputs text-based instructions. It can perform local and global editing, adding or removing backgrounds, color and geometry transformations, object detection, segmentation and many more editing tasks.

 

“Current methods often lean toward either over-modifying or under-performing on various editing tasks,” the researchers wrote.

 

“We argue that the primary objective shouldn’t just be about producing a ‘believable’ image. Instead, the model should focus on precisely altering only the pixels relevant to the edit request.”

 

To that end, Emu Edit has been designed to follow the user’s instructions precisely to ensure that pixels unrelated to the request are untouched by the edit made. As an example, if a user wants to add the text “Aloha!” to a picture of a baseball cap, the cap itself should not be altered.

 

The researchers said incorporating computer vision into instructions for image generation models gives users unprecedented control in image editing.

 

Emu Edit was trained on a dataset that contains 10 million synthesized samples, with each one including an input image, a description of the task to be performed and the targeted output image. The researchers believe this is the largest dataset of its kind ever created, allowing Emu Edit to deliver unrivaled results in terms of instruction faithfulness and image quality.
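To make the shape of that training data concrete, here is a minimal sketch of what one synthesized sample might look like; the field names and values are hypothetical, not the paper’s actual schema.

```python
from dataclasses import dataclass


@dataclass
class EditSample:
    """One instruction-based editing example: an input image, a natural-language
    description of the task, and the targeted output image (illustrative only)."""
    input_image_path: str   # e.g. "cap.jpg"
    instruction: str        # e.g. 'Add the text "Aloha!" to the baseball cap'
    target_image_path: str  # e.g. "cap_aloha.jpg"
    task_type: str          # e.g. "local_edit", "background_removal", ...


# A toy record in the spirit of the 10 million synthesized samples described above.
sample = EditSample(
    input_image_path="cap.jpg",
    instruction='Add the text "Aloha!" to the baseball cap',
    target_image_path="cap_aloha.jpg",
    task_type="local_edit",
)
```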

Emu Video for video generation

Meta’s AI team has also been focused on enhancing video generation. The researchers explained that the process of using generative AI to create videos is similar to image generation, except that it also involves bringing the generated images to life by adding movement.

 

The Emu Video tool leverages the Emu model and provides a simple method for text-to-video generation that’s based on diffusion models. Meta said the tool can respond to various inputs, including text only, image only or both together.

 

The video generation process is split into a couple of steps, the first being to create an image conditioned by a text prompt, before creating a video based on that image and another text prompt. According to the team, this “factorized” approach offers an extremely efficient way to train video generation models.

 

“We show that factorized video generation can be implemented via a single diffusion model,” the researchers wrote. “We present critical design decisions, like adjusting noise schedules for video diffusion, and multi-stage training that allows us to directly generate higher-resolution videos.”
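As a rough illustration of this factorized, two-stage control flow (text to image, then image plus text to video), here is a schematic Python sketch; the model interfaces are placeholders invented for illustration, not Meta’s actual API.

```python
from typing import List, Protocol


class Image:
    """Stand-in for an image tensor or array."""


class TextToImageModel(Protocol):
    def generate(self, prompt: str) -> Image: ...


class ImageToVideoModel(Protocol):
    def generate(self, first_frame: Image, prompt: str,
                 num_frames: int, fps: int) -> List[Image]: ...


def factorized_text_to_video(text_to_image: TextToImageModel,
                             image_to_video: ImageToVideoModel,
                             prompt: str) -> List[Image]:
    """Two diffusion stages: (1) generate an image conditioned on the text prompt,
    (2) generate a video conditioned on that image and the same prompt.
    A 4-second clip at 16 fps, as cited above, corresponds to 64 frames."""
    first_frame = text_to_image.generate(prompt)
    return image_to_video.generate(first_frame, prompt, num_frames=4 * 16, fps=16)
```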

 

 

Meta said the advantage of this new approach is that it’s simpler to implement, using just a pair of diffusion models to produce a 512-by-512, four-second video at 16 frames per second, compared with its older Make-A-Video tool, which uses five models. The company says human evaluations reveal that this work is “strongly preferred” over its earlier work in video generation in terms of overall quality and faithfulness to the original text prompt.

 

Emu Video boasts other capabilities too, including the ability to animate users’ images based on simple text prompts, and once again it outperforms Meta’s earlier work.

 

For now, Meta’s research into generative AI image editing and video generation remains ongoing, but the team stressed there are a number of exciting use cases for the technology. For instance, it can enable users to create their own animated stickers and GIFs on the fly, rather than searching for existing ones that match the idea they’re trying to convey. It can also enable people to edit their own photographs without using complicated tools such as Photoshop.

 

The company added that its latest models are unlikely to replace professional artists and animators anytime soon. Instead, their potential lies in helping people to express themselves in new ways.

 

— Techmeme

Categories
Business Culture Digital - AI & Apps Entertainment News Lifestyle

Nielsen Streaming Top 10: ‘Fall of the House of Usher’ repeatedly bests ‘Suits’ in recent weeks

In its second week on the Nielsen Streaming Top 10, “The Fall of the House of Usher” is already making a strong showing, becoming the only title to outperform “Suits” on the overall chart in back-to-back weeks since its Netflix debut.

 

During the viewing window, the Edgar Allan Poe-inspired series raked in an additional 1.5 billion minutes viewed during its first full week of availability, pushing it to No. 1 again with a 19% jump from its opening figure of 1.2 billion minutes.

 

“Suits” remained in the No. 2 spot, but still logged another week above the 1 billion-minute threshold with 1.06 billion minutes viewed across Netflix and Peacock — an impressive performance in its 18th consecutive week on the chart. To date, the series has accrued 45.445 billion minutes viewed. Landing just beneath is “Grey’s Anatomy,” which recorded 860 million minutes viewed.

Other newcomers this week include Bill Burr’s “Old Dads,” which ranked sixth on the list with 728 million minutes viewed in its opening weekend. “Bodies,” another Netflix title, managed 633 million minutes viewed, landing it at No. 9 on the chart.

 

“Goosebumps” has been reclassified as an original title during this interval after having been previously listed on the acquired titles chart. With the updated classification, the title would have been the No. 5 Original last week when it recorded 553 million viewing minutes. But this week, it lands at No. 10 on the overall chart and No. 3 on the originals chart with 594 million minutes viewed.

 

Elsewhere on the list are “Bluey” (840 million), “NCIS” (811 million), “Gilmore Girls” (698 million), and “Cocomelon” (667 million).

 

See Nielsen’s list of overall streaming rankings for Oct. 16-22 first, followed by original streaming titles, acquired titles and then films.

 

 

 

Variety

Categories
Business Digital - AI & Apps Government Lifestyle Regulations & Security Science Technology

Knightscope expands Port Authority NY NJ service, and a new 15-device client emerges

MOUNTAIN VIEW, Calif. — (BUSINESS WIRE) — $KSCP #SecurityRobot — Knightscope, Inc. [Nasdaq: KSCP] (“Knightscope” or the “Company”), a leading developer of autonomous security robots and blue light emergency communication systems, on Friday announced an expansion of services in New York and a new sale of 15 machines to a police department in California.

 

Port Authority New York New Jersey added the Knightscope Emergency Management System (KEMS) Professional service to monitor its 11 K1 Call Boxes on the George Washington Bridge. The KEMS platform allows clients and technicians to better understand the real-time health and status of deployed emergency communication devices. The cloud-based application monitors system-wide state of health, alerts users to operational issues, provides technicians real-time error detection and diagnostics, and collects and reports system performance statistics.

 

A police department in Southern California is purchasing 5 K1 Blue Light Towers to be installed in a new public park and 10 K1 Blue Light E-Phones in a new parking structure.

 

Knightscope’s Blue Light Towers, E-Phones and Call Boxes expand access to emergency communications for motorists, bicyclists and pedestrians using the bridge, as well as park visitors, by providing direct access to emergency services for people who may be experiencing danger, a crisis or some form of distress.

 

Learn More

Knightscope’s ASR services and industry leading emergency communications products help better protect the places people live, work, study and visit. To learn more about Knightscope’s Blue Light Emergency Communication Systems or Autonomous Security Robots – now with the option of Private LTE – book a discovery call or demonstration now at www.knightscope.com/discover.

 

About Knightscope

Knightscope is an advanced public safety technology company that builds fully autonomous security robots and blue light emergency communications systems that help protect the places people live, work, study and visit. Knightscope’s long-term ambition is to make the United States of America the safest country in the world. Learn more about us at www.knightscope.com. Follow Knightscope on Facebook, X (formerly Twitter), LinkedIn and Instagram.

 

Forward-Looking Statements

This press release may contain “forward-looking statements” about Knightscope’s future expectations, plans, outlook, projections and prospects. Such forward-looking statements can be identified by the use of words such as “should,” “may,” “intends,” “anticipates,” “believes,” “estimates,” “projects,” “forecasts,” “expects,” “plans,” “proposes” and similar expressions. Forward-looking statements contained in this press release and other communications include, but are not limited to, statements about the Company’s profitability and growth. Although Knightscope believes that the expectations reflected in these forward-looking statements are based on reasonable assumptions, there are a number of risks and uncertainties that could cause actual results to differ materially from such forward-looking statements. These risks and uncertainties include, among other things, the risk that the restructuring costs and charges may be greater than anticipated; the risk that the Company’s restructuring efforts may adversely affect the Company’s internal programs and the Company’s ability to recruit and retain skilled and motivated personnel, and may be distracting to employees and management; the risk that the Company’s restructuring efforts may negatively impact the Company’s business operations and reputation with or ability to serve customers; the risk that the Company’s restructuring efforts may not generate their intended benefits to the extent or as quickly as anticipated. Readers are urged to carefully review and consider any cautionary statements and other disclosures, including the statements made under the heading “Risk Factors” in Knightscope’s Annual Report on Form 10-K for the year ended December 31, 2022. Forward-looking statements speak only as of the date of the document in which they are contained, and Knightscope does not undertake any duty to update any forward-looking statements, except as may be required by law.

Contacts

Stacy Stephens
Knightscope, Inc.
(650) 924-1025

Categories
Culture Digital - AI & Apps Economics Lifestyle Technology

Bluesky plans to release a public web interface around the end of November and launch federation in early 2024, and says it crossed 2M users

Sarah Perez / TechCrunch:

 

 

—  Bluesky, the company building a decentralized alternative to Twitter/X, announced Thursday that it has hit 2 million users — up by another million since September, despite remaining an invite-only app.

 

It also revealed its timeframe regarding other key goals, indicating that it planned to have a public web interface go live by the end of the month and would launch federation by early next year.

 

The latter is one of the most important differentiating factors between Bluesky and X, as it would allow Bluesky to function as a more open social network.

 

This means it will work more like Mastodon where users can pick and choose which servers to join and move their accounts around at will. This is what Bluesky today says makes it “billionaire-proof” —  a swipe at Elon Musk’s ownership of Twitter, now called X.

 

“You’ll always have the freedom to choose (and to exit) instead of being held to the whims of private companies or black box algorithms,” a company blog post explained.

 

“And wherever you go, your friends and relationships will be there too,” it noted.

 

Similar to the decentralized service Mastodon, federation would allow anyone to run their own service and connect to any other service also running the same protocol. In Bluesky’s case, this would be done via the AT Protocol that the company is also developing alongside its consumer-facing service and mobile app. However, the other major decentralized social network, Mastodon, uses a well-established protocol, ActivityPub, which has gained greater traction in the months since Musk’s Twitter acquisition.

 

Since then, other companies including Mozilla, Flipboard, Medium, and Automattic (WordPress.com’s parent) have embraced ActivityPub and Mastodon. That could pose a challenge with regard to Bluesky’s eventual reach unless it makes a move to allow the AT Protocol and ActivityPub to somehow interoperate. Bridging the two may be technically possible, but it’s likely something that would take place further down the road, rather than in the near term.

 

In the meantime, Bluesky is working to make its own service more accessible, which includes launching a public web interface later this month. This will allow anyone to view the posts on Bluesky, even if they don’t have an account. That could make the network more promising as a true X rival for breaking news and conversations, but could also expose Bluesky users’ posts to the outside world in ways they’re unprepared for. (The app doesn’t currently offer an option to set profiles to “private,” as Twitter/X does. Some users are not happy about this.)

 

Despite its growth, Bluesky’s hesitancy to drop its invite-only status and open its network to more users has allowed other X competitors to gain a foothold. Last month, for example, Meta CEO Mark Zuckerberg announced his X alternative, Instagram Threads, has just under 100 million monthly active users. He believes it could reach a billion users in the next few years. And Threads intends to interoperate with ActivityPub in the future.

 

Bluesky’s announcement follows Threads’ rapid-fire release of features to make its app more competitive with X, including a chronological feed, support for viewing your likes, search, a (free) edit button, a web version, polls and GIF support, topic tags, and soon, a developer API. Mastodon has also capitalized on the opportunity Twitter’s acquisition presented, launching an easier-to-use version of its service this September. But Mastodon currently has 1.6 million monthly active users, making it still much smaller than Threads.

 

Alongside today’s news, Bluesky also noted other recently launched features, including mobile push notifications, shareable user lists, email verification, advanced feed and thread preferences for sorting and filtering posts, a media tab on user profiles, a Likes tab on your own user profile, suggested follows, and various accessibility improvements.

 

Though Bluesky began its life as a Twitter project under Jack Dorsey, the company was spun out from Twitter with $13 million to get started on R&D. Dorsey sits on its board. This year, the company raised an $8 million seed round led by Neo to further its development, and converted from being a public benefit LLC to a public benefit C Corp.

 

 

Techmeme

Categories
Business Digital - AI & Apps Regulations & Security Science Technology

GitGuardian: Nearly 3K of the 450K projects submitted to PyPI exposed at least one credential in code, like API keys, including some from ‘very large companies’

—  Many transgressions come from “very large companies that have robust security teams.”

 

 

Dan Goodin / Ars Technica:

 

Despite more than a decade of reminding, prodding, and downright nagging, a surprising number of developers still can’t bring themselves to keep their code free of credentials that provide the keys to their kingdoms to anyone who takes the time to look for them.

 

The lapse stems from immature coding practices in which developers embed cryptographic keys, security tokens, passwords, and other forms of credentials directly into the source code they write. The credentials make it easy for the underlying program to access databases or cloud services necessary for it to work as intended. I published one such PSA in 2013 after discovering simple searches that turned up dozens of accounts that appeared to expose credentials securing computer-to-server SSH accounts. One of the credentials appeared to grant access to an account on Chromium.org, the repository that stores the source code for Google’s open source browser.
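For illustration, here is a minimal sketch of the anti-pattern being described: a secret embedded directly in a Python module that ships with the code. The key value, endpoint, and module name are all invented for this example.

```python
# payments.py: the anti-pattern, a credential hardcoded next to the logic that uses it.
# Anyone who can read this file (or the public package it is published in) holds the key.
import requests  # third-party HTTP client, assumed installed

API_KEY = "sk_live_EXAMPLE_DO_NOT_COMMIT"  # fabricated placeholder, not a real secret


def charge(amount_cents: int) -> requests.Response:
    """Call a hypothetical payments API using the hardcoded key above."""
    return requests.post(
        "https://api.example-payments.com/v1/charges",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"amount": amount_cents},
    )
```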

 

In 2015, Uber learned the hard way just how damaging the practice can be. One or more developers for the ride service had embedded a unique security key into code and then shared that code on a public GitHub page. Hackers then copied the key and used it to access an internal Uber database and, from there, steal sensitive data belonging to 50,000 Uber drivers.

 

The credentials exposed provided access to a range of resources, including Microsoft Active Directory servers that provision and manage accounts in enterprise networks, OAuth servers allowing single sign-on, SSH servers, and third-party services for customer communications and cryptocurrencies. Examples included:

  • Azure Active Directory API Keys
  • GitHub OAuth App Keys
  • Database credentials for providers such as MongoDB, MySQL, and PostgreSQL
  • Dropbox Key
  • Auth0 Keys
  • SSH Credentials
  • Coinbase Credentials
  • Twilio Master Credentials.

 

Also included in the haul were API keys for interacting with various Google Cloud services, database credentials, and tokens controlling Telegram bots, which automate processes on the messenger service. This week’s report said that exposures in all three categories have steadily increased in the past year or two.

 

The secrets were exposed in various types of files published to PyPI. They included primary .py files, README files, and test folders.

Chart: Most common types of files other than .py containing a hardcoded secret in PyPI packages.

 

GitGuardian tested the exposed credentials and found that 768 remained active. The risk, however, can extend well beyond that smaller number. GitGuardian explained:

 

It is important to note that just because a credential can not be validated does not mean it should be considered invalid. Only once a secret has been properly rotated can you know if it is invalid. Some types of secrets GitGuardian is still working toward automatically validating include Hashicorp Vault Tokens, Splunk Authentication Tokens, Kubernetes Cluster Credentials, and Okta Tokens.

 

There are no good reasons to expose credentials in code. The report said the most common cause is simple accident.

 

“In the course of outreach for this project, we discovered at least 15 incidents where the publisher was unaware they had made their project public,” the authors wrote. “Without naming any names, we did want to mention some of these were from very large companies that have robust security teams. Accidents can happen to anyone.”

 

Over the past decade, various mechanisms have become available for allowing code to securely access databases and cloud resources. One is .env files that are stored in private environments outside of the publicly available code repository. Others are tools such as the AWS Secrets Manager, Google Cloud’s Secret Manager, or the Azure Key Vault. Developers can also employ scanners that check code for credentials inadvertently included.
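A brief sketch of two of the safeguards mentioned above: reading a secret from the environment (populated from a .env file or a CI secret store that never enters the repository), and a toy scanner that flags likely hardcoded credentials. The variable name and the regex are illustrative only and far cruder than real scanners such as GitGuardian’s.

```python
# config.py: read the secret from the environment instead of the source tree.
import os
import re
import sys


def get_api_key() -> str:
    """Return the API key from the environment; fail loudly if it is missing."""
    key = os.environ.get("PAYMENTS_API_KEY")  # hypothetical variable name
    if not key:
        raise RuntimeError("PAYMENTS_API_KEY is not set; refusing to start.")
    return key


# A toy version of the credential scanners mentioned above: flag lines that look
# like `api_key = "..."`, `password = '...'`, and so on.
SUSPICIOUS = re.compile(
    r"""(api[_-]?key|secret|token|password)\s*=\s*['"][^'"]{8,}['"]""",
    re.IGNORECASE,
)

if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8", errors="ignore") as fh:
            for lineno, line in enumerate(fh, 1):
                if SUSPICIOUS.search(line):
                    print(f"{path}:{lineno}: possible hardcoded credential")
```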

 

The study examined PyPI, which is just one of many open source repositories. In years past, code hosted in other repositories such as NPM and RubyGems has also been rife with credential exposure, and there’s no reason to suspect the practice doesn’t continue in them now.

 

 

Techmeme

Categories
Business Culture Digital - AI & Apps International & World Lifestyle Perspectives Regulations & Security Science Technology

Companies that provide Microsoft, Google, and others with AI data-labeling services often hire minors, exposing them to traumatic content

 

Niamh Rowe / Wired:

 

 

Like most kids his age, 15-year-old Hassan spent a lot of time online. Before the pandemic, he liked playing football with local kids in his hometown of Burewala in the Punjab region of Pakistan. But Covid lockdowns made him something of a recluse, attached to his mobile phone.

 

“I just got out of my room when I had to eat something,” says Hassan, now 18, who asked to be identified under a pseudonym because he was afraid of legal action.

 

But unlike most teenagers, he wasn’t scrolling TikTok or gaming. From his childhood bedroom, the high schooler was working in the global artificial intelligence supply chain, uploading and labeling data to train algorithms for some of the world’s largest AI companies.

 

The raw data used to train machine-learning algorithms is first labeled by humans, and human verification is also needed to evaluate their accuracy. This data-labeling ranges from the simple—identifying images of street lamps, say, or comparing similar ecommerce products—to the deeply complex, such as content moderation, where workers classify harmful content within data scraped from all corners of the internet. These tasks are often outsourced to gig workers, via online crowdsourcing platforms such as Toloka, which was where Hassan started his career.

 

A friend put him on to the site, which promised work anytime, from anywhere. He found that an hour’s labor would earn him around $1 to $2, he says, more than the national minimum wage, which was about $0.26 at the time. His mother is a homemaker, and his dad is a mechanical laborer.

 

“You can say I belong to a poor family,” he says.

 

When the pandemic hit, he needed work more than ever. Confined to his home, online and restless, he did some digging, and found that Toloka was just the tip of the iceberg.

“AI is presented as a magical box that can do everything,” says Saiph Savage, director of Northeastern University’s Civic AI Lab.

 

“People just simply don’t know that there are human workers behind the scenes.”

 

At least some of those human workers are children. Platforms require that workers be over 18, but Hassan simply entered a relative’s details and used a corresponding payment method to bypass the checks—and he wasn’t alone in doing so. WIRED spoke to three other workers in Pakistan and Kenya who said they had also joined platforms as minors, and found evidence that the practice is widespread.

“When I was still in secondary school, so many teens discussed online jobs and how they joined using their parents’ ID,” says one worker who joined Appen at 16 in Kenya, who asked to remain anonymous.

 

After school, he and his friends would log on to complete annotation tasks late into the night, often for eight hours or more.

 

Read more here:

Companies that provide Microsoft, Google, and others with AI data-labeling services are inadvertently hiring minors, often exposing them to traumatic content

 

 

 

Techmeme

Categories
Culture Digital - AI & Apps For Edit International & World Regulations & Security Technology

Experts: Terrorists utilize generative AI tools to evade the hashing algorithms tech platforms use to remove extremist content

—  Experts are finding thousands of examples of AI-created content every week that could allow terrorist groups and other violent extremists to bypass automated detection systems.

 

David Gilbert / Wired:

 

 

Extremist groups have begun to experiment with artificial intelligence, and in particular generative AI, in order to create a flood of new propaganda. Experts now fear the growing use of generative AI tools by these groups will overturn the work Big Tech has done in recent years to keep their content off the internet.

 

 

“Our biggest concern is that if terrorists start using gen AI to manipulate imagery at scale, this could well destroy hash-sharing as a solution,” Adam Hadley, the executive director of Tech Against Terrorism, tells WIRED. “This is a massive risk.”

 

For years, Big Tech platforms have worked hard to create databases of known violent extremist content, known as hashing databases, which are shared across platforms to quickly and automatically remove such content from the internet. But according to Hadley, his colleagues are now picking up around 5,000 examples of AI-generated content each week. This includes images shared in recent weeks by groups linked to Hezbollah and Hamas that appear designed to influence the narrative around the Israel-Hamas war.
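To make the hash-sharing mechanism, and the risk to it, concrete, here is a toy Python sketch using an exact cryptographic hash. Production systems rely on more robust perceptual hashing, and the digest below is a made-up placeholder, but the sketch shows why regenerating or subtly altering an image can defeat fingerprint matching at scale.

```python
import hashlib

# Fingerprints of known violent extremist media, shared across platforms (toy example;
# the digest below is a made-up placeholder, not a hash of any real content).
KNOWN_BAD_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint. Real hash-sharing databases typically use perceptual
    hashes, which tolerate small edits; an exact digest like this one does not."""
    return hashlib.sha256(data).hexdigest()


def should_block(uploaded: bytes) -> bool:
    """Block the upload only if its fingerprint exactly matches a known-bad hash."""
    return fingerprint(uploaded) in KNOWN_BAD_HASHES


# Flipping a single byte, or regenerating the image with a generative model,
# produces a completely different digest, so the upload slips past exact matching.
```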

 

“Give it six months or so, the possibility that [they] are manipulating imagery to break hashing is really concerning,” Hadley says. “The tech sector has done so well to build automated technology, terrorists could well start using gen AI to evade what’s already been done.”

 

Other examples that researchers at Tech Against Terrorism have uncovered in recent months have included a neo-Nazi messaging channel sharing AI-generated imagery created using racist and antisemitic prompts pasted into an app available on the Google Play store; far-right figures producing a “guide to memetic warfare” advising others on how to use AI-generated image tools to create extremist memes; the Islamic State publishing a tech support guide on how to securely use generative AI tools; a pro-IS user of an archiving service claiming to have used an AI-based automatic speech recognition (ASR) system to transcribe Arabic language IS propaganda; and a pro-al-Qaeda outlet publishing several posters with images highly likely to have been created using a generative AI platform.

 

Beyond detailing the threat posed by generative AI tools that can tweak images, Tech Against Terrorism has published a new report citing other ways in which gen AI tools can be used to help extremist groups. These include the use of autotranslation tools that can quickly and easily convert propaganda into multiple languages, or the ability to create personalized messages at scale to facilitate recruitment efforts online. But Hadley believes that AI also provides an opportunity to get ahead of extremist groups and use the technology to preempt what they will use it for.

 

“We’re going to partner with Microsoft to figure out if there are ways using our archive of material to create a sort of gen AI detection system in order to counter the emerging threat that gen AI will be used for terrorist content at scale,” Hadley says. “We’re confident that gen AI can be used to defend against hostile uses of gen AI.”

The partnership was announced today, on the eve of the Christchurch Call Leaders’ Summit, to be held in Paris. The Christchurch Call is a movement designed to eradicate terrorism and extremist content from the internet.

“The use of digital platforms to spread violent extremist content is an urgent issue with real-world consequences,” Brad Smith, vice chair and president at Microsoft, said in a statement. “By combining Tech Against Terrorism’s capabilities with AI, we hope to help create a safer world both online and off.”

 

While companies like Microsoft, Google, and Facebook all have their own AI research divisions and are likely already deploying their own resources to combat this issue, the new initiative will ultimately aid those companies that can’t combat these efforts on their own.

“This will be particularly important for smaller platforms that don’t have their own AI research centers,” Hadley says. “Even now, with the hashing databases, smaller platforms can just become overwhelmed by this content.”

 

The threat of AI generative content is not limited to extremist groups. Last month, the Internet Watch Foundation, a UK-based nonprofit that works to eradicate child exploitation content from the internet, published a report that detailed the growing presence of child sexual abuse material (CSAM) created by AI tools on the dark web.

 

The researchers found over 20,000 AI-generated images posted to one dark web CSAM forum over the course of just one month, with 11,108 of these images judged most likely to be criminal by the IWF researchers. As the IWF researchers wrote in their report, “These AI images can be so convincing that they are indistinguishable from real images.”

 

 

 

Techmeme

Categories
Business Culture Digital - AI & Apps Education Lifestyle Regulations & Security Science Technology

Growing cybersecurity demand is an opportunity to create a more racially inclusive workforce

Cybersecurity experts are in demand, but the current workforce contains too few African Americans and other minorities. Creating a more racially inclusive workforce benefits everyone and is vital to better identify technological risks and vulnerabilities. David Lee, founder of The Identity Jedi, explains how the industry can improve representation and the benefits that will offer.

 

 

MARIETTA, Ga.  — African Americans make up only 9.2% of cybersecurity analysts.1 This lack of representation is not merely unfortunate; it presents multiple issues in terms of creating effective security protocols and addressing critical vulnerabilities.

 

David Lee, an expert on identity and access management and founder of The Identity Jedi, explains, “We live in a world where tech is intertwined in everything we do. In order to create products that serve all people, we need to make sure that we have all people building those products.”

 

The value behind bridging the diversity gap

Diversity, equity and inclusion (DEI) has become a hot topic in all industries, with research indicating how these initiatives can improve a company’s performance and outcomes. Companies that reported above-average diversity on their management teams also reported innovation revenue that was 19% higher than that of companies with below-average leadership diversity—45% of total revenue versus just 26%.2

 

Other benefits of DEI for organizations include greater cash flow, the ability to capture new markets and increased profitability. Studies have found that diverse companies generate 2.3 times higher cash flow than companies with more monolithic staff.3 Diverse organizations are also 70% more likely to capture new markets than companies that do not incorporate under-represented groups in their recruitment processes.4 Finally, a 1% increase in racial diversity similarity between upper and lower management increases firm productivity by between $729 and $1,590 per employee per year.5

 

The first step in bridging the diversity gap is to develop representation. As Lee says, “There are a million ways to accomplish something in tech. The more diverse perspective you have, the stronger product you get. Tech is used by everyone, so it should be created by a representation of everyone.”
He challenges organizations to:

  • Actually want to solve the problem. If an organization doesn’t really care, then it will show in its efforts and results.
  • Talk to current Black employees and create a safe space for them to talk about their experience.
  • Engage with the local community, from historically Black colleges and universities (HBCUs) to Black tech organizations such as Cyversity, NSBE and ACM, among others, and connect talent departments to these pipelines.

 

Become intentional about diversity, equity and inclusion

Lee finds that having an intentional DE&I program helps bring awareness and representation to the field of cybersecurity. “Organizations need to engage with HBCUs,” he advises. “Connect with their computer science and engineering departments to host events and provide pathways to connect with candidates.”

 

Other ways that cybersecurity companies or any organization can boost their employee diversity include creating a safe space for workers to connect with their co-workers and embrace their culture via employee resource groups. Offering leadership training on bias that includes open and intentional conversations with company leaders about bias can also be beneficial.

 

Lee’s firsthand experiences as a Black professional within the technology industry inspired him to write “The Only One in the Room: The Unwritten Laws of Being Black in Tech.” The book shares the challenges he and other African Americans have faced in that sector, in addition to drawing attention to the importance of representation and diversity in reshaping the industry.

 

About The Identity Jedi

David Lee transitioned from a software engineering background to become a harbinger of change and inclusivity in the tech world. With over two decades of experience, he has left his mark on government agencies, Fortune 500 companies, and numerous fields, specializing in identity and access management. Recognizing that for technology to truly transform the world, it must embrace diversity, David serves as an agent of transformation, inspiring individuals to unlock their full potential. His influential voice and actionable insights have solidified his reputation as a respected figure in the ever-evolving tech landscape. When he speaks people listen. He is The Identity Jedi. www.theidentityjedi.com

 

References:

  1. “Cyber Security Analyst demographics and statistics in the US”; Zippia; Accessed October 26, 2023; zippia.com/cyber-security-analyst-jobs/demographics/.
  2. Rocío Lorenzo, Nicole Voigt, Miki Tsusaka, Matt Krentz, and Katie Abouzahr; “How Diverse Leadership Teams Boost Innovation”; January 23, 2018; Boston Consulting Group; bcg.com/publications/2018/how-diverse-leadership-teams-boost-innovation
  3. Reiners, Bailey; “50 Diversity in the Workplace Statistics to Know”; Built in; updated March 28, 2023; builtin.com/diversity-inclusion/diversity-in-the-workplace-statistics#0.
  4. Sylvia Ann Hewlett, Melinda Marshall, Laura Sherbin; “How Diversity Can Drive Innovation”; Harvard Business Review; December 2013; hbr.org/2013/12/how-diversity-can-drive-innovation.
  5. Lauren Turner, Maya Fischhoff; “How Diversity Increases Productivity”; Network for Business Sustainability; January 19, 2021; nbs.net/how-diversity-increases-productivity/.

 

– jotopr.com