Archive | March, 2022

Let’s go: On to 6G!

13 Mar

We are committed to the 5G evolution towards 6G

Every eight to ten years, a new mobile network generation emerges. But the 5th generation of mobile communications is quite different from its predecessors, and many promises and expectations are attached to it. For the first time, this is a purely software-based network that, by definition, can be changed and evolved. We are therefore convinced that 6G will mainly be an evolution of 5G, with disruption originating from the inclusion of terahertz communications.

6G is also associated with new business and operation models, green ICT considerations, more agile network deployments, run-time optimizations based on machine learning and artificial intelligence, closer RAN-Core integration due to end-to-end virtualization, and non-terrestrial network integration. In addition, terahertz communication, as a new 6G RAN technology, paves the way for new capabilities in sensing and data capacity.

Campus networks are driving 5G innovations

We truly believe that the 5G enterprise/campus networks emerging globally are driving the major 5G innovations: many 5G application domains, such as manufacturing, health, mining, and events, promote localized spectrum usage in higher frequencies and allow for much faster, customized deployment of innovative, secure, and resilient end-to-end network infrastructures with highly flexible operation models. This is the motivation for our continued R&D in the Open5GCore and the 5G Playground.

CampusOS: ecosystem for open 5G campus networks

Fraunhofer FOKUS, together with Fraunhofer HHI, is coordinating the CampusOS project, which is funded with 18 million euros by the German Federal Ministry for Economic Affairs and Climate Action. CampusOS started in January 2022 with a duration of three years; its 4 academic and 18 industry partners, covering equipment and software providers, integrators, and application providers, aim to develop an ecosystem for emerging open campus networks. Current Open RAN discussions can be regarded as drivers towards open, multi-vendor campus networks. This is to be enabled by the development of suitable reference architectures, functional component catalogues, end-to-end deployment and operation blueprints, and open reference test sites, with the FOKUS 5G Playground serving as one of these test sites. A major target of this project is to extend the Open5GCore to “OpenRAN-Readiness”.

6G SENTINEL: Fraunhofer lighthouse project 

Fraunhofer FOKUS is part of 6G SENTINEL, a lighthouse project of the Fraunhofer-Gesellschaft started in 2021 to develop key technologies for the upcoming 6G mobile communications standard. A first white paper, published at the end of 2021, provides initial results, and a major target of this project is to extend the Open5GCore to “6G-Readiness”.

6G Hubs for Germany

Since August 2021, the German Federal Ministry of Education and Research has been funding the establishment of four hubs for research into the future 6G technology with up to 250 million euros. Fraunhofer FOKUS is actively contributing to two of them over the next four years: the “6G Research and Innovation Cluster” (6G-RIC) and the Open6GHub. A major target of these projects is to develop a new “Organic 6G Core”, enabling the early prototyping of emerging 3GPP 6G standards from 2026 onwards.

Please keep an eye on the Open5GCore roadmap and look out for our future releases, which aim to enable applied “becoming 6G-ready” research from the beginning of 2022.

By: Prof. Dr. Thomas Magedanz
Source: https://www.6g-ready.net/ 13 03 22

Is Artificial Intelligence Undermining The Legal System?

7 Mar


Let’s consider an interesting potential legal case that might be arising sooner than you think.

A prosecutor announces the filing of charges against a well-known figure. This riles up ardent fans of the popular person. Some of those fans are adamant that their perceived hero can do no wrong and that any effort to prosecute is abundantly unfair, misguided, and altogether a travesty of justice.

Protests ensue. Rowdy crowds show up at the courthouse where the prosecutor is typically found. In addition, protesters even opt to stand outside the home of the prosecutor and make quite a nuisance, attracting outsized TV and social media attention. Throughout this protest storm, the prosecutor stands firm and states without reservation that the charges are entirely apt.

All of a sudden, a news team gets wind of rumours that the prosecutor is unduly biased in this case. Anonymously provided materials seem to surely showcase that the prosecutor wanted to go after the defendant for reasons other than the purity of the law. Included in the trove of such indications are text messages by the prosecutor, emails by the prosecutor, and video snippets in which the prosecutor clearly makes inappropriate and unsavoury remarks about the accused.

Intense pressure mounts to get the prosecutor taken off the case. Likewise, similar pressure arises to get the charges dropped.

What Should Happen?

Well, imagine if I told you that the text messages, the emails, and the video clips were all crafted via the use of AI-based deepfake technologies. None of that seeming “evidence” of wrongdoing or at least inappropriate actions by the prosecutor is real. It certainly looks real. The texts use the same style of text messaging that the prosecutor normally uses. The emails have the same written style as other emails by the prosecutor.

And the most damning of the materials, those video clips of the prosecutor, clearly show the prosecutor’s face, and the words spoken are in the prosecutor’s own voice. You might have been willing to assume that the texts and the emails could be faked, but the video seems to settle the matter. Here is the prosecutor caught on video saying things that are utterly untoward in this context. All of it could readily be prepared via today’s AI-based deepfake technology.

I realise it might seem far-fetched that someone would use such advanced technology simply to get a prosecutor to back down. The thing is, access to deepfake-creation tools is increasingly as straightforward as falling off a log. There is nothing expensive about it. You can readily find those tools online via any ordinary Internet search query.

You also don’t need to be a rocket scientist to use those tools. You can learn how to use deepfake-creation software in an hour or less. I dare say a child can do it (and they do). The AI takes care of the heavy lifting for you.

Lest you think the aforementioned scenario about the prosecutor is overblown and won’t ever happen, I bring to your attention a recently reported case that made headlines: cybercriminals planted criminal evidence on a lawyer who is a human rights defender.

This is perhaps more daunting than the prosecutor scenario in that the so-called incriminating evidence was inserted into the electronic devices customarily used by the lawyer. When the devices were inspected, the disreputable materials appeared to have been created by the lawyer. Unless you knew how to look carefully into the detailed bits and bytes, it would seem that the attorney had indeed self-created the scandalous materials.

According to the news coverage, this took place in India and was part of an ongoing plot by cybercriminals carrying out an Advanced Persistent Threat (APT) style of cyberattack against all manner of civil rights defenders. The evildoers are targeting attorneys, reporters, scholars, and just about anybody they believe ought not to be pursuing any noteworthy legal-oriented civil rights actions.

The presumed intent of the planted content is to discredit those involved in human rights cases. By seeding the targeted computers with untoward materials, a startling reveal at the right time can make the unsuspecting victim appear to be a villain, or to have committed some other crime or misconduct, undercutting their personal and professional efforts as a civil rights proponent.

You never know what evil might lurk on your own electronic devices (keep a sober eye on your smartphone, laptop, personal computer, etc.).

Using AI To Make Lawyers Look Like Crooks

The incident reported in India could happen anywhere in the world. Given that your electronic devices are likely connected to the Internet, a cyber break-in is feasible by someone in their pyjamas on the other side of the globe. Make sure all of your cybersecurity protections are enabled and kept up to date (this won’t guarantee avoiding a break-in, though it reduces the odds). Scan your devices regularly to try to detect any adverse implants early.

There was no reported indication of whether the planted materials were made by hand or via an AI-based deepfake system. Text messages and emails could easily be prepared by hand; there is no need to use an AI system for that. The video deepfakes are far less likely to have been done by hand. You would pretty much need a reasonably good AI-based deepfake tool to pull that off. If a deepfake is crudely prepared, the victim could expose the videos as fakery with relative ease.

We all know that video and audio are the most powerful of deepfake productions. You can usually argue persuasively that texts and emails did not originate with you. The problem with video and audio is that society is enamoured of what it can see with its own eyes and hear with its own ears. People are only now wrestling with the realisation that they should not take at face value the video and audio they happen to come across. Old habits of immediate acceptance are hard to overcome.

It used to be that the AI used for deepfakes was quite crude. You could watch a video and, with a scant modicum of inspection, realise that it must be a fake. No more. Today’s AI generators that produce deepfake video and audio are getting really good at the fakery. Nowadays, the only way to try to reveal a fake video as fake tends to involve using AI to do so. Yes, ironically, there are AI tools that can examine a purported deepfake and attempt to detect whether fakery was used in the making of the video and the audio (there are telltale traces sometimes left in the content).
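To give a flavour of what such a telltale trace can look like: generative models that synthesize or upsample imagery often leave unusual high-frequency patterns in the Fourier spectrum of a frame, which detection tools can measure. The sketch below is a deliberately minimal illustration of that idea in Python, not any real product’s method; the function names and the threshold are my own assumptions, and a genuine detector would calibrate on labelled real/fake data and use far richer features.

```python
import numpy as np

def high_freq_energy_ratio(img: np.ndarray) -> float:
    """Fraction of a frame's spectral energy in the outermost frequency band.

    Generative upsampling can leave periodic high-frequency artifacts, so an
    unusually large ratio is one (weak) cue that a frame may be synthetic.
    """
    # Power spectrum with the DC component shifted to the center.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - cy, xx - cx)
    cutoff = 0.75 * radius.max()  # outer 25% of radial frequencies
    return float(spectrum[radius >= cutoff].sum() / spectrum.sum())

def looks_suspicious(img: np.ndarray, threshold: float = 0.05) -> bool:
    # Illustrative threshold only; a real detector would be calibrated
    # on labelled real/fake data rather than a hand-picked constant.
    return high_freq_energy_ratio(img) > threshold
```

For instance, a smooth natural-looking gradient concentrates its energy near DC and yields a tiny ratio, while a frame dominated by synthetic high-frequency texture (approximated here by white noise) yields a much larger one and would be flagged.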

This AI versus AI gambit is an ongoing cat and mouse game. Improvements are continually being made in the AI that produces deepfakes, and meanwhile, improvements are equally being made in the AI that tries to ferret out deepfakes. Each tries to keep a step ahead of the other.

Final Thoughts

So, be on the watch for AI-based deepfake materials being produced about you.

This won’t happen on a widespread basis in the near term. On the other hand, within a few years the likelihood of AI-based deepfakes being used nefariously against attorneys, judges, and likely even juries is going to grow. Ease of use, low cost, and awareness are all that it takes for evildoers to employ AI-based deepfakes for foul purposes, especially if a few successes get touted as having undercut the wheels of justice in any notable fashion.

You should also be on your toes about AI-based deepfakes underpinning evidence that parties attempt to introduce at trial. Do not be caught off-guard. You can decidedly bet that both criminal and civil trials will soon enough be deluged with evidence that might or might not be crafted via AI-based deepfakes. The legal wrangling over this is going to be constant and loud, and it will add a hefty new wrinkle to how our courts and our court cases are handled.

Author: Dr Lance Eliot 
Source: https://www.lawyer-monthly.com/2022/03/is-artificial-intelligence-undermining-the-legal-system/ 07 03 22
