Stuff South Africa: South Africa's Technology News Hub
https://stuff.co.za
Wed, 10 Apr 2024

Google plans to charge for AI-powered search features
https://stuff.co.za/2024/04/08/google-plans-to-charge-for-ai-search/
Mon, 08 Apr 2024

You didn’t think that artificial intelligence would remain (mostly) free forever, did you? A new report claims that Google is looking at ways to begin charging users for AI-powered search and other associated features. That would mark quite a departure from how the company does things. It only recently chose to charge users for access to YouTube (if you’d like to avoid ads and gain other features), so this is a big step.

The report, via the Financial Times, reckons that the choice is being made in response to the expense of developing and maintaining AI systems within the company. It’s not an unusual call to make. Users keen on the latest features from OpenAI are expected to hand over a monthly premium for the same reason.

Smart, Google

The company feels that its new search features, reportedly a single AI-generated response to search queries in the style of ChatGPT and other chatbots, are worth charging for. Other features will probably also fall under this banner as they are developed. The main question is how to charge users.

It’s not immediately clear whether Google will create an all-new subscription service for users. It’s more likely that folks who are already giving the company money will gain access to enhanced artificial intelligence assistance for search queries — if that’s what they want. A small subset of users are already testing new features but others will have to sign up for access when it becomes more broadly available.


Read More: Google Gemini replaces Bard as catch-all AI platform


It’s a slick idea and one that might drive adoption of existing paid-for Google products. AI is increasingly being used to tempt users over to a particular brand rather than acting as a standalone product. Microsoft’s Edge browser, Adobe’s services, and many others are touting AI integration. Even Mercedes is bragging about ChatGPT integration. The search giant will probably have little trouble selling the idea to its users.

Source

Instagram and Threads are limiting political content. This is terrible for democracy
https://stuff.co.za/2024/04/08/instagram-thread-limiting-political-content/
Mon, 08 Apr 2024

Meta’s Instagram and Threads apps are “slowly” rolling out a change that will no longer recommend political content by default. The company defines political content broadly as being “potentially related to things like laws, elections, or social topics”.

Users who follow accounts that post political content will still see such content in the normal, algorithmically sorted ways. But by default, users will not see any political content in their feeds, stories or other places where new content is recommended to them.

For users who want political recommendations to remain, Instagram has a new setting where users can turn it back on, making this an “opt-in” feature.

This change not only signals Meta’s retreat from politics and news more broadly, but also challenges any sense of these platforms being good for democracy at all. It’s also likely to have a chilling effect, stopping content creators from engaging politically altogether.

Politics: dislike

Meta has long had a problem with politics, but that wasn’t always the case.

In 2008 and 2012, political campaigning embraced social media, and Facebook was seen as especially important in Barack Obama’s success. The Arab Spring was painted as a social-media-led “Facebook Revolution”, although Facebook’s role in these events was widely overstated.

Since then, however, the spectre of political manipulation, most visibly in the 2018 Cambridge Analytica scandal, has soured social media users on politics on these platforms.

Increasingly polarised politics, vastly increased mis- and disinformation online, and Donald Trump’s preference for social media over policy, or truth, have all taken a toll. In that context, Meta has already been reducing political content recommendations on its main Facebook platform since 2021.

Instagram and Threads hadn’t been limited in the same way, but also ran into problems. Most recently, Human Rights Watch accused Instagram in December last year of systematically censoring pro-Palestinian content. With the new content recommendation change, Meta’s response to that accusation today would likely be that it is applying its political content policies consistently.

How the change will play out in Australia

Notably, many Australians, especially in younger age groups, find news on Instagram and other social media platforms. Sometimes they are specifically seeking out news, but often not.

Not all news is political. But now, by default, none of the news recommended on Instagram will be. The serendipity of discovering political stories that motivate people to think or act will be lost.

Combined with Meta’s recent statement that it will no longer pay to support the Australian news and journalism shared on its platforms, it’s fair to say Meta is seeking to be as apolitical as possible.

The social media landscape is fracturing

With Elon Musk’s disastrous Twitter rebranding to X, and TikTok facing the possibility of being banned altogether in the United States, Meta appears as the most stable of the big social media giants.

But with Meta positioning Threads as a potential new town square while Twitter/X burns down, it’s hard to see what a town square looks like without politics.

The lack of political news, combined with a lack of any news on Facebook, may well mean young people see even less news than before, and have less chance to engage politically.

In a Threads discussion, Instagram Head Adam Mosseri made the platform’s position clear:

Politics and hard news are important, I don’t want to imply otherwise. But my take is, from a platform’s perspective, any incremental engagement or revenue they might drive is not at all worth the scrutiny, negativity (let’s be honest), or integrity risks that come along with them.

As with Facebook, politics is simply too hard for Instagram and Threads. The political process and democracy can be pretty hard too, but it’s now clear that’s not Meta’s problem.

A chilling effect on creators

Instagram’s announcement also reminded content creators their accounts may no longer be recommended due to posting political content.

If political posts were preventing recommendation, creators could see the exact posts and choose to remove them. Content creators live or die by the platform’s recommendations, so the implication is clear: avoid politics.


Read More: You can now delete your Threads account without killing your Instagram account – here’s how


Creators already spend considerable time trying to interpret what content platforms prefer, building algorithmic folklore about which posts do best.

While that folklore is sometimes flawed, Meta couldn’t be clearer on this one: political posts will prevent audience growth, and thus make an already precarious living harder. That’s the definition of a political chilling effect.

For the audiences who turn to creators, because they are perceived to be relatable and authentic, the absence of political posts or positions will likely stifle political issues, discussion and thus ultimately democracy.

 

[Embedded Instagram post by matt bernstein (@mattxiv)]

How do I opt back in?

For Instagram and Threads users who want these platforms to still share political content recommendations, follow these steps:

  • go to your Instagram profile and click the three lines to access your settings.
  • click on Suggested Content (or Content Preferences for some).
  • click on Political content, and then select “Don’t limit political content from people that you don’t follow”.

Rain is hiking prices at the beginning of June 2024
https://stuff.co.za/2024/04/05/rain-hiking-prices-beginning-of-june-2024/
Fri, 05 Apr 2024

Oh, Rain. You know we love you. But that becomes a little more difficult when you announce a price hike across your entire spectrum of products. We won’t stop loving you, of course, but was there no way we could have done this, say, next year? Or maybe never?

According to the company, no. In an email sent to Rain customers yesterday, the company is placing the blame at the feet of inflation. Don’t get too down. Rather than add to the pile of increases hitting South African wallets this April — DStv, Eskom, Vodacom, and petrol, to name a few — Rain will wait to purge your wallet of the extra funds until 1 June 2024. How kind.

(Don’t) Bless the Rains


Rain doesn’t appear to have announced the pricing update publicly, limiting the news to customers through email and notifications in the myrain section of the website. We have since contacted Rain’s support team to ascertain whether the increase is limited to rainOne packages, or if this was an ‘everyone is screwed’ situation.

Unfortunately, the latter is true. We’ve compiled a list of all the affected packages and their new prices that will come into effect from 1 June 2024 onwards:

  • 4G UOP – R285 (R20 increase)
  • 4G Phone Unlimited – R355 (R26 increase)
  • 4G Unlimited – R535 (R36 increase)
  • rainOne 4G Unlimited – R535 (R36 increase)
  • rainOne 4G UOP – R285 (R20 increase)
  • 5G Basic – R569 (R40 increase)
  • 5G Basic Boost – R569 (R40 increase)
  • 5G Standard – R795 (R56 increase)
  • 5G Premium – R995 (R4 decrease)
  • rainOne 30Mbps – R595 (R36 increase)
  • rainOne 60Mbps – R795 (R36 increase)
  • rainOne 100Mbps – R995 (R36 increase)
  • rainOne 30Mbps Legacy upgrade – R569 (R40 increase)
  • rainOne 60Mbps Legacy upgrade – R795 (R56 increase)
  • rainOne 100Mbps Legacy upgrade – R995 (R36 increase)
An anonymous coder nearly hacked a big chunk of the internet. How worried should we be?
https://stuff.co.za/2024/04/04/a-coder-nearly-hacked-a-chunk-of-internet/
Thu, 04 Apr 2024

Outside the world of open-source software, it’s likely few people would have heard about XZ Utils, a small but widely used tool for data compression in Linux systems. But late last week, security experts uncovered a serious and deliberate flaw that could leave networked Linux computers susceptible to malicious attacks.

The flaw has since been confirmed as a critical issue that could allow a knowledgeable hacker to gain control over vulnerable Linux systems. Because Linux is used throughout the world in email and web servers and application platforms, this vulnerability could have given the attacker silent access to vital information held on computers throughout the world – potentially including the device you’re using right now to read this.

Major software vulnerabilities, such as the SolarWinds hack and the Heartbleed bug, are nothing new – but this one is very different.

The XZ Utils hack attempt took advantage of the way open-source software development often works. Like many open-source projects, XZ Utils is a crucial and widely used tool – and it is maintained largely by a single volunteer, working in their spare time. This system has created huge benefits for the world in the form of free software, but it also carries unique risks.

Open source and XZ Utils

First of all, a brief refresher on open-source software. Most commercial software, such as the Windows operating system or the Instagram app, is “closed-source” – which means nobody except its creators can read or modify the source code. By contrast, with “open-source” software, the source code is openly available and people are free to do what they like with it.

Open-source software is very common, particularly in the “nuts and bolts” of software which consumers don’t see, and is hugely valuable. One recent study estimated the total value of open-source software in use today at US$8.8 trillion.

Until around two years ago, the XZ Utils project was maintained by a developer called Lasse Collin. Around that time, an account using the name Jia Tan submitted an improvement to the software.

Not long after, some previously unknown accounts popped up to report bugs and submit feature requests to Collin, putting pressure on him to take on a helper in maintaining the project. Jia Tan was the logical candidate.

Over the next two years, Jia Tan became more and more involved and, we now know, introduced a carefully hidden weapon into the software’s source code.

The revised code secretly alters another piece of software, a ubiquitous network security tool called OpenSSH, so that it passes malicious code to a target system. As a result, a specific intruder will be able to run any code they like on the target machine.

The latest version of XZ Utils, containing the backdoor, was set to be included in popular Linux distributions and rolled out across the world. However, it was caught just in time when a Microsoft engineer investigated some minor memory irregularities on his system.

A rapid response

What does this incident mean for open-source software? Well, despite initial appearances, it doesn’t mean open-source software is insecure, unreliable or untrustworthy.

Because all the code is available for public scrutiny, developers around the world could rapidly begin analysing the backdoor and the history of how it was implemented. These efforts could be documented, distributed and shared, and the specific malicious code fragments could be identified and removed.

A response on this scale would not have been possible with closed-source software.

An attacker would need to take a somewhat different approach to target a closed-source tool, perhaps by posing as a company employee for a long period and exploiting the weaknesses of the closed-source software production system (such as bureaucracy, hierarchy, unclear reporting lines and poor knowledge sharing).

However, if they did achieve such a backdoor in proprietary software, there would be no chance of large-scale, distributed code auditing.

Lessons to be learned

This case is a valuable opportunity to learn about weaknesses and vulnerabilities of a different sort.

First, it demonstrates the ease with which online relations between anonymous users and developers can become toxic. In fact, the attack depended on the normalisation of these toxic interactions.

The social engineering part of the attack appears to have used anonymous “sockpuppet” accounts to guilt-trip and emotionally coerce the lead maintainer into accepting minor, seemingly innocuous code additions over a period of years, pressuring them to cede development control to Jia Tan.

One user account complained:

You ignore the many patches bit rotting away on this mailing list. Right now you choke your repo.

When the developer professed mental health issues, another account chided:

I am sorry about your mental health issues, but its important to be aware of your own limits.

Individually, such comments might appear innocuous, but in concert they become a mob.

We need to help developers and maintainers better understand the human aspects of coding, and the social relationships that affect, underpin or dictate how distributed code is produced. There is much work to be done, particularly to improve the recognition of the importance of mental health.

A second lesson is the importance of recognising “obfuscation”, a process often used by hackers to make software code and processes difficult to understand or reverse-engineer. Many universities do not teach this as part of a standard software engineering course.
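To illustrate the idea, here is a deliberately trivial Python sketch of obfuscation (this is not the actual technique used in the XZ Utils backdoor, which hid its payload in build scripts and binary test files): the same logic can be hidden behind an encoded blob, so a reviewer skimming the code sees only an opaque string.

```python
import base64

# Readable version: a reviewer can see exactly what this does.
def greet(name):
    return "Hello, " + name + "!"

# Obfuscated version: the same logic hidden inside an encoded blob. In a
# real attack the blob would be committed pre-encoded; we build it here so
# the example stays self-explanatory.
source = "def hidden_greet(name):\n    return 'Hello, ' + name + '!'"
blob = base64.b64encode(source.encode())

# Executing the decoded blob defines hidden_greet without its logic ever
# appearing as readable text at the point of execution.
namespace = {}
exec(base64.b64decode(blob).decode(), namespace)

print(namespace["hidden_greet"]("world"))  # prints "Hello, world!"
```

Real-world obfuscation is far more layered, but the reviewer-facing problem is the same: the code that runs is not the code you can read.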


Read More: Cybercriminals are creating their own AI chatbots to support hacking and scam users


Third, some systems may still be running the dangerous versions of XZ Utils. Many popular smart devices (such as refrigerators, wearables and home automation tools) run on Linux. These devices often reach an age at which it is no longer financially viable for their manufacturers to update their software – meaning they do not receive patches for newly discovered security holes.
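For readers who want to check a machine themselves: the compromised releases are 5.6.0 and 5.6.1, tracked as CVE-2024-3094. A minimal Python sketch that inspects the locally installed xz version might look like this (the exact version-string format can vary between distributions, so treat this as a rough check, not an audit):

```python
import re
import shutil
import subprocess

# CVE-2024-3094: xz releases 5.6.0 and 5.6.1 contain the backdoor.
VULNERABLE = {"5.6.0", "5.6.1"}

def xz_status():
    """Return a human-readable status for the locally installed xz."""
    if shutil.which("xz") is None:
        return "xz not installed"
    out = subprocess.run(
        ["xz", "--version"], capture_output=True, text=True
    ).stdout
    match = re.search(r"(\d+\.\d+\.\d+)", out)
    version = match.group(1) if match else "unknown"
    if version in VULNERABLE:
        return f"VULNERABLE ({version})"
    return f"OK ({version})"

print(xz_status())
```

If a system reports a vulnerable version, the fix is to install your distribution’s patched package rather than patch xz by hand.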

And finally, whoever is behind the attack – some have speculated it may be a state actor – has had free access to a variety of codebases over a two-year period, perpetrating a careful and patient deception. Even now, that adversary will be learning from how system administrators, Linux distribution producers and codebase maintainers are reacting to the attack.

Where to from here?

Code maintainers around the world are now thinking about their vulnerabilities at a strategic and tactical level. It is not only their code itself they will be worrying about, but also their code distribution mechanisms and software assembly processes.

My colleague David Lacey, who runs the not-for-profit cybersecurity organisation IDCARE, often reminds me the situation facing cybersecurity professionals is well articulated by a statement from the IRA. In the wake of their unsuccessful bombing of the Brighton Grand Hotel in 1984, the terrorist organisation chillingly claimed:

“Today we were unlucky, but remember we only have to be lucky once. You will have to be lucky always.”


Google offers to destroy Incognito data in bid to settle class-action lawsuit
https://stuff.co.za/2024/04/02/google-offers-to-destroy-incognito-data/
Tue, 02 Apr 2024

If you’ve been using Google Chrome’s Incognito mode believing that the internet giant wasn’t tracking you, you haven’t been paying attention to American legal proceedings. The search company has offered to delete its store of Incognito data as part of a settlement for an ongoing class-action lawsuit concerning the browser ‘feature’.

The lawsuit was instituted in 2020 by users upset that Incognito didn’t prevent Google from tracking their browsing and other activity while the feature was active. At best it kept data about their activities off their computers, which makes sense since the most recent updates to the function are all about locking it down on the device it’s being used on.

Terror Incognito

The suit complains that this miscommunication has left Google in possession of a vast store of information that it’s not supposed to have. This covers personal data, habits, and other “intimate and potentially embarrassing things” that users are presumably not okay with sharing with a faceless international data company.

Deletion of this data collection is the main step Google is offering in the settlement, but other steps are also being taken. Google will be clearer about which data is being collected by its products. Incognito users will also be able to block third-party cookies for five years. No monetary damages await the Big G in this settlement, but individual users are welcome to sue the company.

“The result is that Google will collect less data from users’ private browsing sessions, and that Google will make less money from the data,” said the company’s lawyers.

Google spokesman Jose Castaneda added, “We never associate data with users when they use Incognito mode. We are happy to delete old technical data that was never associated with an individual and was never used for any form of personalization.”

The implication here is that the technical data collected by the feature was used for some internal purpose at Google. It may or may not have served its purpose but there’s no clarity about what Google products or features have seen a benefit from data users were under the impression was a secret to everyone including Google. Still, the deletion, if approved by a California judge, will mark the end of these legal proceedings for the search giant.

Source

Dating apps: Lack of regulation, oversight and competition affects quality, and millions stand to lose
https://stuff.co.za/2024/03/31/dating-apps-lack-of-regulation-oversight/
Sun, 31 Mar 2024

When Aleksandr Zhadan used ChatGPT to talk to over 5,000 women on Tinder, it was a sign of things to come.

As artificial intelligence becomes more sophisticated and easily available, online dating is facing an onslaught of AI-powered fraud. The industry, which is dominated by a small number of incumbents, has already proven slow to respond to long-standing problems on its apps. AI will be its moment of reckoning — there are even apps that can help people write their messages.

Opponents of dating apps may be happy to see the industry crash and burn. The rest of us should worry. Online dating plays an important, and I believe positive, role in our lives. It has made it easier for people to find relationships, and easier to find people with whom we are truly compatible.

As the industry careens towards disaster, regulators should be prepared to intervene.

 

[Embedded Instagram post by Futurism (@futurism)]

Real versus fake connections

Zhadan’s case shows one of the challenges AI poses for online dating. Now, when we chat with someone on one of the apps, we cannot know if their answers are written by a chatbot, nor can we know how many other people they are talking to simultaneously. We also can’t know if someone’s photos have been produced with the help of an AI image generator.

But at least Zhadan was actually looking for love. Since the launch of ChatGPT in late 2022, the amount of outright fraud on dating apps, much of it powered by AI, has skyrocketed. According to cybersecurity company Arkose Labs, there was, between January 2023 and January 2024, a staggering 2,000 per cent increase in bot attacks on dating sites.

And this is just the beginning. AI is getting more powerful, and more convincingly human, all the time.

Even before AI appeared on the scene, fraud on dating apps was already a serious problem. Sign up for one of them and you’ll instantly find your feed clogged with an endless number of fake profiles. Most of them have been created for a specific purpose, which is to steal your money. Unfortunately, it works.

In 2023, 64,000 people in the United States admitted to being the victims of romance scams, most of which happen through dating apps — we can assume this is only a small portion of the actual cases.

The Federal Trade Commission measures the losses for the year at US$1.14 billion. This has been going on for years, and the app companies have done little to stop it.

Online connections, offline threats

Fraud is not the only challenge faced by dating app users. A quarter of them, mostly women, have been stalked by someone they met online. Even more tragic are the cases of people being assaulted or murdered.

There are other issues: prices on the apps have gone up steadily and innovation has come to a grinding halt. Ever since Tinder introduced the card stack in 2016, the design of the apps has hardly changed.

You swipe, match, message and hope for the best. It should perhaps be no surprise that customers are getting fed up.

Benefits to society

While online dating certainly has its share of long-standing critics, I have argued that, on balance, the apps are a benefit to users and to society. They are an efficient way to find partners, get us out of our social bubbles and encourage connections across class and race.

Precisely because of the important role the technology plays in our lives, we should pay attention to how the industry operates. The dating app companies are finally starting to do something to protect users.

But given how long fraud has plagued these apps, their response has been slow and pretty underwhelming. They need, at a minimum, better tools to detect fake accounts and remove them quickly. There is a lot more they could do as well.

They could require background checks for users, which polls show a majority of people support. They could put AI to use themselves, to flag signs of fraud during people’s private chats. And dating app companies could implement safety features to protect users when they meet in person, for instance making it easier to share with your friends or family the profiles of people you are meeting up with.

Dominant players

One explanation for the companies’ sluggish response will be familiar to any observer of big tech: the concentration of ownership. The dominant player, Match Group, owns over 40 different apps, including most of the well-known ones: Tinder, Match.com, OkCupid, Hinge and Plenty of Fish. Its only serious competitor for market share is Bumble, which also owns Badoo and Fruitz.

In the United States, Match Group and Bumble control over three-quarters of the market.

Anti-trust authorities have never given the industry any serious scrutiny. Presumably, they do not think online dating is important enough to deserve it. But these companies have a lot of control over one of the most intimate aspects of our lives.

Thirty per cent of all adults in the U.S., and over half of people under 30, have used a dating app at some point. One in 10 Americans is currently in a relationship with someone they met online.


Read More: Dating apps are accused of being ‘addictive’. What makes us keep swiping?


The costs of fraud and abuse, in both human and financial terms, are huge. And the anti-competitive pressures in the industry are strong, given the network effect built into online dating: we want to be on the apps that everyone else is on.

Regulators should finally get involved. They should hold the companies accountable for fraud and abuse on their apps in order to force them to innovate to protect users. They should look closely at the prices they charge customers for premium features. The ultimate solution may be to break up the sector’s dominant players, Match Group and Bumble, in order to create real competition.

The inventors of dating apps deserve credit for enabling millions of connections that would never have happened otherwise. But if things don’t change, the companies could be in trouble and millions of people could be lonelier as a result.


Google is sticking AI into your search results whether you like it or not
https://stuff.co.za/2024/03/26/google-sticking-ai-into-your-search-results/
Tue, 26 Mar 2024

Dread it. Run from it. Google’s AI will arrive all the same. And now it’s here. The search giant has, perhaps unsurprisingly, retrofitted Google Search with artificially intelligent results, generating a summary of your search and essentially bypassing the traditional trove of links you’d normally find up top. Ads, on the other hand…

This isn’t exactly new. Google first announced what it calls the Search Generative Experience (SGE) in 2023, offering exactly what we’re describing — AI-powered Search results. The only difference? SGE is no longer an opt-in-only sort of deal. It’s more of an ‘if you want to keep using Search then you have to put up with this’ deal.

A new era for Search

Aptly named Search Engine Land was the first to pick up on the change, which is limited to a small group of users in the US for now. The Google train moves fast, after all. Before long, it’ll be rolling out globally — South Africa included. And honestly, we’re not all that fussed about the decision. If anything, we welcome it.

See, Google won’t be injecting itself into every search that crosses your mind. According to SGE’s announcement from May 2023, it will limit “the types of queries where these capabilities will appear.” Specifically, according to a spokesperson, it will only come into play where its presence would be “truly additive” to the experience.


Read More: Google’s new ‘SIMA’ AI is your future co-op gaming buddy


When you’re looking up “how to cook mac ‘n cheese”, AI can get involved, saving you from the hundreds of thousands of recipes in Search all vying for your attention with some sob story about how their love of cheese came from their Grandmother. It has the potential to be a real game-changer, massively improving the speed at which we access information.

As transformative as the change could be, it raises the question: who’s losing out in all this? If Google’s AI is deciding what is “truly additive” to a result, how long before it feels the need to step in every time — all while pulling away valuable traffic from websites? Websites that offer up the very information Google is cribbing.

Source

Generative AI could leave users holding the bag for copyright violations
https://stuff.co.za/2024/03/25/generative-ai-could-leave-users-holding-the/
Mon, 25 Mar 2024

Generative artificial intelligence has been hailed for its potential to transform creativity, especially by lowering the barriers to content creation. While the creative potential of generative AI tools has often been highlighted, their popularity poses questions about intellectual property and copyright protection.

Generative AI tools such as ChatGPT are powered by foundational AI models, or AI models trained on vast quantities of data. Generative AI is trained on billions of pieces of data taken from text or images scraped from the internet.

Generative AI uses very powerful machine learning methods such as deep learning and transfer learning on such vast repositories of data to understand the relationships among those pieces of data – for instance, which words tend to follow other words. This allows generative AI to perform a broad range of tasks that can mimic cognition and reasoning.
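The “which words tend to follow other words” idea can be sketched in a few lines of Python as a toy bigram counter (this is nothing like the scale or architecture of a real foundation model; it only illustrates the underlying statistical intuition):

```python
from collections import Counter, defaultdict

# A tiny corpus stands in for the billions of scraped documents a real
# model would be trained on.
corpus = "the cat sat on the mat the cat ate".split()

# Count, for each word, which word tends to follow it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent successor of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # prints "cat" ("cat" follows "the" twice, "mat" once)
```

Modern models learn vastly richer associations over words, pixels and more, but the training signal is the same kind of co-occurrence statistics scaled up enormously.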

One problem is that output from an AI tool can be very similar to copyright-protected materials. Leaving aside how generative models are trained, the challenge that widespread use of generative AI poses is how individuals and companies could be held liable when generative AI outputs infringe on copyright protections.

When prompts result in copyright violations

Researchers and journalists have raised the possibility that through selective prompting strategies, people can end up creating text, images or video that violates copyright law. Typically, generative AI tools output an image, text or video but do not provide any warning about potential infringement. This raises the question of how to ensure that users of generative AI tools do not unknowingly end up infringing copyright protection.

The legal argument advanced by generative AI companies is that AI trained on copyrighted works is not an infringement of copyright since these models are not copying the training data; rather, they are designed to learn the associations between the elements of writings and images, like words and pixels. AI companies, including Stability AI, maker of image generator Stable Diffusion, contend that the output images provided in response to a particular text prompt are not likely to be a close match for any specific image in the training data.

Builders of generative AI tools have argued that prompts do not reproduce the training data, which should protect them from claims of copyright violation. Some audit studies have shown, though, that end users of generative AI can issue prompts that result in copyright violations by producing works that closely resemble copyright-protected content.

Establishing infringement requires detecting a close resemblance between the expressive elements of a stylistically similar work and the original expression in particular works by an artist. Researchers have shown that methods such as training data extraction attacks, which involve selective prompting strategies, and extractable memorization, which tricks generative AI systems into revealing training data, can recover individual training examples ranging from photographs of individuals to trademarked company logos.

Audit studies such as the one conducted by computer scientist Gary Marcus and artist Reid Southern provide several examples where there can be little ambiguity about the degree to which visual generative AI models produce images that infringe on copyright protection. The New York Times provided a similar comparison of images showing how generative AI tools can violate copyright protection.

How to build guardrails

Legal scholars have dubbed the challenge of building guardrails against copyright infringement into AI tools the “Snoopy problem.” The more a copyrighted work protects a likeness – for example, the cartoon character Snoopy – the more likely it is that a generative AI tool will copy it, compared with copying a specific image.

Researchers in computer vision have long grappled with the issue of how to detect copyright infringement, such as logos that are counterfeited or images that are protected by patents. Researchers have also examined how logo detection can help identify counterfeit products. These methods can be helpful in detecting violations of copyright. Methods to establish content provenance and authenticity could be helpful as well.
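One family of techniques in this space, perceptual hashing, can be sketched in a few lines. This toy average hash flags near-duplicate images by comparing brightness-pattern fingerprints; production systems (such as pHash-style tools) work over resized, normalised images at scale, and the tiny pixel grids below are made up purely for illustration:

```python
def average_hash(pixels):
    """Compute a simple average hash of a grayscale image (a 2D list of
    0-255 values): each bit records whether a pixel is brighter than the
    image's mean. A simplified version of real perceptual hashing."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count differing bits; small distances suggest near-duplicates."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Invented 3x3 "images": a near copy keeps the brightness pattern,
# an unrelated image inverts it.
original  = [[200, 200, 10], [10, 200, 10], [200, 10, 200]]
near_copy = [[190, 210, 15], [12, 205, 8], [195, 5, 198]]
unrelated = [[10, 10, 200], [200, 10, 200], [10, 200, 10]]

d_copy = hamming_distance(average_hash(original), average_hash(near_copy))
d_other = hamming_distance(average_hash(original), average_hash(unrelated))
print(d_copy, d_other)  # → 0 9
```

The same idea underpins logo and counterfeit detection: a catalogue of hashes of protected images can be checked cheaply against new content.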

With respect to model training, AI researchers have suggested methods for making generative AI models unlearn copyrighted data. Some AI companies, such as Anthropic, have pledged not to use data produced by their customers to train advanced models such as its large language model Claude. Methods for AI safety, such as red teaming – attempts to force AI tools to misbehave – or ensuring that the model training process reduces the similarity between the outputs of generative AI and copyrighted material, may help as well.

Role for regulation

Human creators know to decline requests to produce content that violates copyright. Can AI companies build similar guardrails into generative AI?

There are no established approaches to building such guardrails into generative AI, nor are there any public tools or databases that users can consult to establish copyright infringement. Even if such tools were available, they could put an excessive burden on both users and content providers.


Read More: Why student experiments with Generative AI matter for our collective learning


Given that naive users can’t be expected to learn and follow best practices to avoid infringing copyrighted material, there are roles for policymakers and regulation. It may take a combination of legal and regulatory guidelines to ensure best practices for copyright safety.

For example, companies that build generative AI models could use filtering or restrict model outputs to limit copyright infringement. Similarly, regulatory intervention may be necessary to ensure that builders of generative AI models assemble datasets and train models in ways that reduce the risk that the output of their products infringes creators’ copyrights.
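A minimal sketch of the output-filtering idea, assuming a small registry of protected texts and a crude similarity threshold. A real deployment would need scalable fingerprinting across millions of works rather than pairwise comparison; the registry entries below are public-domain stand-ins used only for illustration:

```python
from difflib import SequenceMatcher

# Placeholder registry; real systems would index millions of works.
PROTECTED_WORKS = [
    "it was the best of times it was the worst of times",
    "call me ishmael some years ago never mind how long precisely",
]

def too_similar(output, threshold=0.8):
    """Flag model output that closely matches a known protected text.
    The threshold is arbitrary here; tuning it is itself a hard policy
    question (too low blocks fair use, too high misses copies)."""
    for work in PROTECTED_WORKS:
        if SequenceMatcher(None, output.lower(), work).ratio() >= threshold:
            return True
    return False

print(too_similar("It was the best of times it was the worst of times"))  # → True
print(too_similar("An entirely original sentence about gardening"))       # → False
```

A model provider could run a check like this before returning output, refusing or rewriting responses that trip the filter.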


Undersea cables for Africa’s internet retrace history and leave digital gaps as they connect continents https://stuff.co.za/2024/03/17/undersea-cables-for-africa-internet-history/ Sun, 17 Mar 2024 12:00:25 +0000 https://stuff.co.za/?p=190876 Large parts of west and central Africa, as well as some countries in the south of the continent, were left without internet services on 14 March because of failures on four of the fibre optic cables that run below the world’s oceans. Nigeria, Côte d’Ivoire, Liberia, Ghana, Burkina Faso and South Africa were among the worst affected. By midday on 15 March the problem had not been resolved. Microsoft warned its customers that there was a delay in repairing the cables. South Africa’s News24 reported that, while the cause of the damage had not been confirmed, it was believed that “the cables snapped in shallow waters near the Ivory Coast, where fishing vessels are likely to operate”.

Jess Auerbach Jahajeeah, an associate professor at the University of Cape Town’s Graduate School of Business, is currently writing a book on fibre optic cables and digital connectivity. She spent time in late 2023 aboard the ship whose crew is responsible for maintaining most of Africa’s undersea network. She spoke to The Conversation Africa about the importance of these cables.

1. What’s the geographical extent of Africa’s current undersea network?

Fibre optic cables now literally encircle Africa, though some parts of the continent are far better connected than others. This is because both public and private organisations have made major investments in the past ten years.

Based on an interactive map of fibre optic cables, it’s clear that South Africa is in a relatively good position. When the breakages happened, the network was affected for a few hours before the internet traffic was rerouted – a technical process that depends both on alternative routes being available and on corporate agreements being in place to enable the rerouting. It’s the same as driving using a tool like Google Maps: if there’s an accident on the road, it finds another way to get you to your destination.

But, in several African countries – including Sierra Leone and Liberia – most of the cables don’t have spurs (the equivalent of off-ramps on the road), so only one fibre optic cable actually comes into the country. Internet traffic from these countries basically stops when the cable breaks.
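The rerouting described above, and its failure when only one cable serves a country, can be sketched as a path search over a network graph. The city pairs below are a hypothetical topology invented for illustration, not the actual cable map, and real rerouting additionally depends on BGP and commercial peering agreements:

```python
from collections import deque

def find_route(links, start, end):
    """Breadth-first search for any path from start to end.
    Models only the 'is there another road?' part of rerouting."""
    graph = {}
    for a, b in links:
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, []).append(a)
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == end:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no alternative route: traffic stops

# Hypothetical topology: two coastal routes into Cape Town.
links = [("Lagos", "Accra"), ("Accra", "Cape Town"),
         ("Lagos", "Luanda"), ("Luanda", "Cape Town")]

print(find_route(links, "Lagos", "Cape Town"))   # → ['Lagos', 'Accra', 'Cape Town']

# Simulate a break on the Accra-Cape Town segment: traffic reroutes.
broken = [l for l in links if l != ("Accra", "Cape Town")]
print(find_route(broken, "Lagos", "Cape Town"))  # → ['Lagos', 'Luanda', 'Cape Town']
```

With only a single link into a country, removing that edge makes the search return `None`, which is exactly the situation Sierra Leone and Liberia face when their one cable breaks.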

Naturally that has huge implications for every aspect of life, business and even politics. Whilst some communication can be rerouted via satellites, satellite traffic accounts for only about 1% of digital transmissions globally. Even with interventions such as satellite-internet distribution service Starlink it’s still much slower and much more expensive than the connection provided by undersea cables.

Basically all internet for regular people relies on fibre optic cables. Even landlocked countries rely on the network, because they have agreements with countries with landing stations – highly-secured buildings close to the ocean where the cable comes up from underground and is plugged into terrestrial systems. For example southern Africa’s internet comes largely through connections in Melkbosstrand, just outside Cape Town, and Mtunzini in northern KwaZulu-Natal, both in South Africa. Then it’s routed overland to various neighbours.

Each fibre optic cable is extremely expensive to build and to maintain. Depending on the technical specifications (cables can have more or fewer fibre threads and enable different speeds for digital traffic) there are complex legal agreements in place for who is responsible for which aspects of maintenance.

2. What prompted you to write a book about the social history of fibre optic cables in Africa?

I first visited Angola in 2011 to start work for my PhD project. The internet was all but non-existent – sending an email took several minutes at the time. Then I went back in 2013, after the South Atlantic Cable System went into operation. It made an incredible difference: suddenly Angola’s digital ecosystem was up and running and everybody was online.

At the time I was working on social mobility and how people in Angola were improving their lives after a long war. Unsurprisingly, having digital access made all sorts of things possible that simply weren’t imaginable before. I picked up my interest again once I was professionally established, and am now writing it up as a book, Capricious Connections. The title refers to the fact that the cables wouldn’t do anything if it wasn’t for the infrastructure that they plug into at various points.

Landing centres such as Sangano in Angola are fascinating both because of what they do technically (connecting and routing internet traffic all over the country) and because they often highlight the complexities of the digital divide.

For example, Sangano is a remarkable high-tech facility run by an incredibly competent and socially engaged company, Angola Cables. Yet the school a few hundred metres from the landing station still doesn’t have electricity.

When we think about the digital divide in Africa, that’s often still the reality: you can bring internet everywhere but if there’s no infrastructure, skills or frameworks to make it accessible, it can remain something abstract even for those who live right beside it.

In terms of history, fibre optic cables follow all sorts of fascinating global precedents. The 2012 cable that connected one side of the Atlantic Ocean to the other is laid almost exactly over the route of the transatlantic slave trade, for example. Much of the basic cable map is layered over the routes of the copper telegraph network that was essential for the British empire in the 1800s.

Most of Africa’s cables are maintained at sea by the remarkable crew of the ship Léon Thévenin. I joined them in late 2023 during a repair operation off the coast of Ghana. These are uniquely skilled artisans and technicians who retrieve and repair cables, sometimes from depths of multiple kilometres under the ocean.

When I spent time with the crew last year, they recounted once accidentally retrieving a section of Victorian-era cable when they were trying to “catch” a much more recent fibre optic line. (Cables are retrieved in many ways; one way is with a grapnel-like hook that is dragged along the ocean bed in roughly the right location until it snags the cable.)

There are some very interesting questions emerging now about what is commonly called digital colonialism. In an environment where data is often referred to with terms like “the new oil”, we’re seeing an important change in digital infrastructure.

Previously cables were usually financed by a combination of public and private sector partnerships, but now big private companies such as Alphabet, Meta and Huawei are increasingly financing cable infrastructure. That has serious implications for control and monitoring of digital infrastructure.

Given we all depend so much on digital tools, poorer countries often have little choice but to accept the terms and conditions of wealthy corporate entities. That’s potentially incredibly dangerous for African digital sovereignty, and is something we should be seeing a lot more public conversation about.


Google’s Gemini showcases more powerful technology, but we’re still not close to superhuman AI https://stuff.co.za/2024/03/15/google-gemini-showcases-powerful-technology/ Fri, 15 Mar 2024 07:16:26 +0000 https://stuff.co.za/?p=190826 In December 2023, Google announced the launch of its new large language model (LLM) named Gemini. Gemini now provides the artificial intelligence (AI) foundations of Google products; it is also a direct rival to OpenAI’s GPT-4.

But why does Google consider Gemini such an important milestone, and what does this mean for users of Google’s services? And, more generally, what does it mean in the context of the current hyperfast pace of AI development?

AI everywhere

Google is betting on Gemini to transform most of its products by enhancing current functionalities and creating new ones for services such as search, Gmail, YouTube and its office productivity suite. This would also allow improvements to its online advertising business — its main source of revenue — as well as to Android phone software, with trimmed versions of Gemini running on limited-capacity hardware.

For users, Gemini means new features and improved capacities that would make Google services harder to shun, strengthening an already dominant position in areas such as search engines. The potential and opportunities for Google are considerable, given the bulk of their software is easily upgradable cloud services.

But the huge and unexpected success of ChatGPT attracted a lot of attention and enhanced the credibility of OpenAI. Gemini allows Google to reassert itself, in the public eye, as a major player in AI. Google is a powerhouse in AI, with large and strong research teams behind many of the major advances of the last decade.

There is public discussion about these new technologies, both on the benefits they provide and the disruption they create in fields such as education, design and health care.

Strengthening AI

At its core, Gemini relies on transformer networks. Originally devised by a research team at Google, the same technology is used to power other LLMs such as GPT-4.

A distinctive element of Gemini is its capacity to deal with different data modalities: text, audio, image and video. This provides the AI model with the capacity to execute tasks over several modalities, like answering questions regarding the content of an image or conducting a keyword search on specific types of content discussed in podcasts.

More importantly, the models’ ability to handle distinct modalities enables the training of globally superior AI models, compared with distinct models trained independently for each modality. Indeed, such multimodal models are deemed to be stronger because they are exposed to different perspectives of the same concepts.

For example, the concept of birds may be better understood by learning from a mix of textual descriptions, vocalizations, images and videos of birds. This idea of multimodal transformer models was explored in previous research at Google, with Gemini being the first full-fledged commercial implementation of the approach.

Such a model is seen as a step in the direction of stronger generalist AI models, also known as artificial general intelligence (AGI).

Risks of AGI

Given the rate at which AI is advancing, the expectation that AGI with superhuman capabilities will be designed in the near future generates discussion both in the research community and, more broadly, in society.

On one hand, some anticipate the risk of catastrophic events if a powerful AGI falls into the hands of ill-intentioned groups, and request that developments be slowed down.

Others claim that we are still very far from such actionable AGI, that the current approaches allow for a shallow modelling of intelligence, mimicking the data on which they are trained, and lack an effective world model — a detailed understanding of actual reality — required to achieve human-level intelligence.

On the other hand, one could argue that focusing the conversation on existential risk distracts attention from more immediate impacts brought on by recent advances in AI, including perpetuating biases, producing incorrect and misleading content (which prompted Google to pause its Gemini image generator), increasing environmental impacts and reinforcing the dominance of Big Tech.


Read More: Google Gemini replaces Bard as catch-all AI platform


The line to follow lies somewhere between all of these considerations. We are still far from the advent of actionable AGI — additional breakthroughs are required, including stronger capacities for symbolic modelling and reasoning.

In the meantime, we should not be distracted from the important ethical and societal impacts of modern AI. These considerations are important and should be addressed by people with diverse expertise, spanning technological and social science backgrounds.

Nevertheless, although this is not a short-term threat, achieving AI with superhuman capacity is a matter of concern. It is important that we, collectively, become ready to responsibly manage the emergence of AGI when this significant milestone is reached.

