Enabling Secure and Positive Communication in Educational Networks: Leveraging AWS Rekognition for Content Filtering

[4-minute read]

In this article, we showcase the successful integration of AWS Rekognition into the START platform from the START Foundation, one of Germany’s most prominent foundations, highlighting the role of this technology in bolstering safety and security and reducing the risk of racist, violent, and obscene content appearing on the platform.

START Foundation has an extensive ecosystem of services provided by Cloudnonic. Among these services, there is an internal educational social network that empowers the community to share experiences, thoughts, and provide valuable feedback, keeping everyone informed about upcoming events. This timeline of Posts—without specific filters—can be seen on both the website and mobile app.

The foundation aimed to address the lack of filters for their posts by seeking an automated solution that could filter inappropriate image content within its application services.

After researching various content-filtering tools such as Google Perspective, Cloud Vision, and OpenAI, the foundation opted for AWS Rekognition for four specific reasons:

  1. Amazon Rekognition featured a straightforward and user-friendly API, capable of analyzing any image or video file stored in Amazon S3. The service was also able to identify objects, people, text, scenes, and activities—highlighting inappropriate content with a high confidence level.
  2. Given that the foundation already manages its infrastructure in AWS, we only had to set up the SDK in the project.
  3. AWS Rekognition has great documentation and support for the project’s programming language, facilitating its implementation.
  4. AWS Rekognition was in line with the budget constraints of the foundation.

Now, for those curious about the implementation process, I can tell you it was remarkably straightforward! 

But first, let me explain how AWS Rekognition works:

Using the DetectModerationLabels operation, we first define the minimum confidence level for moderation, a value between 0 and 100. Rekognition only returns moderation labels whose detection confidence meets or exceeds this threshold, so the parameter lets us fine-tune the program’s sensitivity in deciding which images get labeled as containing inappropriate content.

E.g.:

$result = $rekognitionClient->detectModerationLabels([
    'Image' => [
        'S3Object' => ['Bucket' => $bucket, 'Name' => $imageKey],
    ],
    'MinConfidence' => 50, // Minimum confidence level for moderation labels
]);

Note that the list of moderation categories is not passed in the request; it is part of the response. Top-level categories the service can return include Explicit Nudity, Violence, Visually Disturbing, Rude Gestures, Drugs, Alcohol, and Hate Symbols.

After analyzing the uploaded image, the API returns the result. If the response contains any moderation labels (that is, labels detected at or above the configured confidence threshold), we prevent the image from being published and send it to an admin for manual review.

This is an example (JSON response):

{ 
"ModerationLabels": [
    {
        "Confidence": 99.24723052978516,
        "ParentName": "",
        "Name": "Violence"
    },
    {
        "Confidence": 99.24723052978516,
        "ParentName": "",
        "Name": "Alcohol"
    },
    {
        "Confidence": 88.25341796875,
        "ParentName": "Explicit Nudity",
        "Name": "Sexual Activity"
    }
]
}
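The labels in such a response can be turned directly into the information an admin needs for manual review. Here is a minimal sketch in JavaScript that extracts the offending top-level categories from the sample above (the helper name `flaggedCategories` is an assumption, not part of the project’s code):

```javascript
// Sample response, trimmed from the article above.
const response = {
  ModerationLabels: [
    { Confidence: 99.24723052978516, ParentName: "", Name: "Violence" },
    { Confidence: 99.24723052978516, ParentName: "", Name: "Alcohol" },
    { Confidence: 88.25341796875, ParentName: "Explicit Nudity", Name: "Sexual Activity" },
  ],
};

// Prefer the top-level category (ParentName) when present, so the
// admin sees "Explicit Nudity" rather than the finer-grained label.
function flaggedCategories(moderationLabels) {
  const categories = moderationLabels.map((l) => l.ParentName || l.Name);
  return [...new Set(categories)]; // de-duplicate repeated categories
}

console.log(flaggedCategories(response.ModerationLabels));
// -> ["Violence", "Alcohol", "Explicit Nudity"]
```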

To implement AWS Rekognition, we made some adjustments to both the backend, developed in PHP, and the frontend, developed in ReactJS.


This PHP code snippet iterates over each uploaded image received through an endpoint and performs moderation-label detection using the AWS Rekognition service.
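As an illustration of that flow, here is a rough sketch in JavaScript with AWS SDK v3 (rather than the project’s actual PHP); the function and variable names are assumptions:

```javascript
// Decide whether an uploaded image may be published, given the
// ModerationLabels array from a DetectModerationLabels response.
// Rekognition only returns labels at or above MinConfidence, so
// any label at all means the image was flagged.
function moderationVerdict(moderationLabels) {
  const flagged = moderationLabels.map((label) => label.Name);
  return { allowed: flagged.length === 0, flagged };
}

// Sketch of the per-upload check. The bucket/key parameters and
// helper names are assumptions; the real project does this in PHP.
async function checkUpload(bucket, key) {
  // Lazy require, so moderationVerdict stays usable without the SDK.
  const {
    RekognitionClient,
    DetectModerationLabelsCommand,
  } = require("@aws-sdk/client-rekognition");

  const client = new RekognitionClient({});
  const response = await client.send(
    new DetectModerationLabelsCommand({
      Image: { S3Object: { Bucket: bucket, Name: key } },
      MinConfidence: 50,
    })
  );
  return moderationVerdict(response.ModerationLabels ?? []);
}
```

On a non-empty `flagged` list, the backend would withhold the post and queue the image for admin review, as described above.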

This React code snippet handles the rejected action for creating a post, specifically when the post contains explicit content in an image.
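A minimal sketch of such handling, assuming a Redux-style reducer on the frontend; the action types and payload shape (`posts/createPost/rejected`, `flaggedLabels`) are assumptions, not the project’s actual code:

```javascript
// Posts slice that surfaces a moderation error when the backend
// rejects a post because an image contains explicit content.
const initialState = { status: "idle", error: null };

function postsReducer(state = initialState, action) {
  switch (action.type) {
    case "posts/createPost/pending":
      return { ...state, status: "loading", error: null };
    case "posts/createPost/fulfilled":
      return { ...state, status: "succeeded", error: null };
    case "posts/createPost/rejected": {
      // flaggedLabels is an assumed field carrying the Rekognition
      // label names returned by the backend.
      const labels = (action.payload && action.payload.flaggedLabels) || [];
      return {
        ...state,
        status: "failed",
        error: labels.length
          ? `Your image was flagged for: ${labels.join(", ")}. It has been sent for manual review.`
          : "Your post could not be created.",
      };
    }
    default:
      return state;
  }
}
```

A component would then read `error` from the store and render it to the user instead of the new post.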

It is necessary to incorporate such filters when our application is open to the community. With these filters implemented, we can mitigate various risks and dangers associated with unrestricted content generation, such as uploading explicit images. 

Failing to filter such content could mean exposing users to material they may find distressing, or seeing that material leaked to the press, significantly damaging the foundation’s reputation.

Above all, our goal has been to discourage undesirable behavior, such as posting content that depicts nudity, substance abuse, or violence. The implementation of these filters not only enhances the integrity of our platform but also underscores our commitment to promoting a positive and responsible community.

See how we can implement this for your business
