What is DragGAN AI? Is the new tool better than Photoshop?

DragGAN AI is a new tool that lets users edit images interactively with drag-and-drop controls, and it could change the way images are edited. Here’s what you need to know about the new artificial intelligence tool!

What is DragGAN AI?

A team of researchers from the Max Planck Institute for Informatics and MIT CSAIL has created a new image-editing tool called DragGAN, which lets users interactively reshape and retouch images.

DragGAN is built on a Generative Adversarial Network (GAN), which synthesizes images to match the user’s intent, unlike editing tools such as Photoshop that only distort or crop existing pixels.
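To make the contrast concrete, here is a minimal PyTorch sketch of how a GAN renders an entire image from a latent code, which is why edits can be made by changing that code rather than by moving existing pixels. The `Generator` class below is a toy stand-in invented for this example; the real DragGAN research builds on StyleGAN2, whose architecture is far more elaborate.

```python
import torch
import torch.nn as nn

# Toy stand-in generator: maps a latent vector to an RGB image.
# The actual DragGAN work builds on StyleGAN2; this module only
# illustrates that a GAN synthesizes the whole image from a code.
class Generator(nn.Module):
    def __init__(self, latent_dim=512, img_size=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 3 * img_size * img_size),
            nn.Tanh(),
        )
        self.img_size = img_size

    def forward(self, z):
        out = self.net(z)
        return out.view(-1, 3, self.img_size, self.img_size)

G = Generator()
z = torch.randn(1, 512)   # latent code
image = G(z)              # the GAN renders a full image from z

# Editing in this paradigm means changing z (or intermediate features)
# and re-rendering -- not pushing existing pixels around as in Photoshop.
```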

RIP Photoshop.

With just a few clicks you will be able to edit any image EXACTLY the way you want it. 🤯 pic.twitter.com/Nck3i50Mwb

— Lorenzo Green (@mrgreen), May 19, 2023

The study mentioned, “Through DragGAN, anyone can deform an image with precise control over where pixels go, manipulating the pose, shape, expression, and layout of diverse categories such as animals, cars, people, landscapes, etc.

“Since these manipulations are performed on the learned generative image manifold of a GAN, they tend to produce realistic results even in challenging scenarios, such as hallucinating occluded content and deforming shapes in a way that consistently follows the object’s rigidity.”

How to use DragGAN AI?

DragGAN is still only a research project, which means users cannot test it yet. The work has not been released to the public, and there is no app or website where its features can be tried.

Once it becomes available, users should be able to use DragGAN AI by following these steps (a hypothetical code sketch follows the list):

  • Visit the DragGAN website (still under development)
  • Click the “Upload Image” button and select the desired image
  • Click a point on the image and drag it to the desired position
  • When you release the point, the image content moves toward the target position.
  • Continue to adjust the image as needed.
  • When you’re done editing, click the “Save” button to save the updated image.
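Because DragGAN is not yet available, there is no real code to show; the sketch below is a purely hypothetical Python outline of the workflow in the list above. The `drag_edit` function, the `DragPoint` type, and all file names are invented for illustration and do not correspond to any released DragGAN API.

```python
# Hypothetical usage sketch: DragGAN has no public app or API yet, so
# drag_edit() and DragPoint below are invented purely to illustrate the
# "upload, drag, save" workflow described in the list above.
from dataclasses import dataclass

@dataclass
class DragPoint:
    handle: tuple   # (x, y) pixel the user clicks and drags
    target: tuple   # (x, y) pixel where that content should end up

def drag_edit(image_path: str, points: list, out_path: str) -> None:
    """Placeholder for the workflow described above; the real tool is unreleased."""
    # 1. "Upload Image": load the picture and invert it into the GAN's latent space.
    # 2. Drag: optimize the latent code until each handle point reaches its target.
    # 3. "Save": render the edited latent code back out to an image file.
    print(f"Would edit {image_path} with {len(points)} drag point(s) -> {out_path}")

# Example: nudge the corner of a mouth upward by 15 pixels to create a smile.
drag_edit(
    image_path="face.jpg",
    points=[DragPoint(handle=(120, 200), target=(120, 185))],
    out_path="face_smiling.jpg",
)
```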


Source: Max Planck Institute for Informatics

The research paper states, “Our DragGAN approach allows users to ‘drag’ the content of any GAN-generated image. The user simply clicks a few handle points (red) and target points (blue) on the image, and our approach moves the handle points to precisely reach their corresponding target points.

“Users can optionally draw a mask over a flexible region (the brighter area), keeping the rest of the image fixed. This flexible point-based manipulation enables control over many spatial attributes such as pose, shape, expression, and layout across different object categories.”
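To make the handle-point, target-point, and mask description concrete, here is a minimal, heavily simplified PyTorch sketch of the underlying idea. The toy generator, the point coordinates, and the loss weights are all invented for illustration; the actual DragGAN method operates on StyleGAN2 feature maps and also re-tracks the handle points after every optimization step, which is omitted here.

```python
import torch
import torch.nn.functional as F

# Toy sketch of the point-based editing idea quoted above: optimize a latent
# code so that content at a handle point moves toward a target point, while a
# binary mask keeps the rest of the feature map fixed.

def toy_generator(w: torch.Tensor) -> torch.Tensor:
    """Map a latent code to a (1, C, H, W) feature map, differentiably."""
    return w.view(1, 4, 16, 16)

def feature_at(feats: torch.Tensor, point: tuple) -> torch.Tensor:
    """Bilinearly sample the feature vector at an (x, y) location."""
    _, _, h, w = feats.shape
    x = 2.0 * point[0] / (w - 1) - 1.0   # normalize to [-1, 1] for grid_sample
    y = 2.0 * point[1] / (h - 1) - 1.0
    grid = torch.tensor([[[[x, y]]]], dtype=feats.dtype)
    return F.grid_sample(feats, grid, align_corners=True).flatten()

handle = (4.0, 8.0)    # red handle point (x, y) the user clicks
target = (10.0, 8.0)   # blue target point (x, y) the user drags toward

w = torch.randn(1, 4 * 16 * 16, requires_grad=True)
with torch.no_grad():
    frozen = toy_generator(w).clone()        # reference features to keep fixed

mask = torch.zeros(1, 1, 16, 16)
mask[..., 6:11, 2:12] = 1.0                  # editable ("flexible") region

# Unit step along the handle -> target direction.
direction = torch.tensor(target) - torch.tensor(handle)
direction = direction / direction.norm()
ahead = (handle[0] + direction[0].item(), handle[1] + direction[1].item())

optimizer = torch.optim.Adam([w], lr=0.05)
for step in range(100):
    feats = toy_generator(w)
    # Motion supervision (simplified): make the feature slightly ahead of the
    # handle match the handle's current feature, nudging content toward the target.
    move_loss = F.l1_loss(feature_at(feats, ahead),
                          feature_at(feats, handle).detach())
    # Mask term: outside the flexible region, features must stay unchanged.
    keep_loss = F.l1_loss(feats * (1 - mask), frozen * (1 - mask))
    loss = move_loss + 10.0 * keep_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```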

What are the features of DragGAN AI?

Here are some features of DragGAN AI:

  • Point-based editing – The AI platform lets users edit images directly on screen by dragging and dropping points, which allows more precise and realistic adjustments than traditional image-editing software.
  • DragGAN allows users to integrate 3D models – DragGAN goes a step further by creating 3D models of images, letting users change the position, shape, appearance, and arrangement of objects in an image while keeping the result coherent and realistic.
  • User-friendly interface – DragGAN promises a simple, approachable experience for both experienced retouchers and beginners with AI image-editing tools. The interface is designed to streamline the editing process so users can easily achieve the result they want.
  • Potential to revolutionize image editing – With its unique features, DragGAN could change the way we approach image editing. By combining point-based editing and 3D modeling, it pushes the boundaries of what is possible and gives users new avenues for artistic expression.

The study mentioned, “We conduct an extensive evaluation of DragGAN on diverse datasets including animals (lions, dogs, cats, and horses), people (faces and full bodies), cars, and landscapes.

“Our approach efficiently moves user-defined handle points to their target points, achieving diverse manipulation effects across many object categories.

“Unlike conventional shape-deformation methods that simply apply warping [Igarashi et al. 2005], our deformation is performed on the learned image manifold of a GAN, which tends to follow the underlying object structures.

“Our approach can, for example, hallucinate occluded content, such as the teeth inside a lion’s mouth, and can deform shapes following the object’s rigidity, like bending a horse’s leg. We have also developed a GUI that lets users perform these manipulations interactively by simply clicking on the image.”
