The end or a second wind: how neural networks are changing the world of fine art
Fine art has always been one of the main products of human culture. For many centuries, it allowed people to express themselves and tell stories.
First came cave painting, then oil painting and photography. Now the era of “fine” artificial intelligence, and neural networks in particular, has arrived.
FORKLOG looked into which AI models are used to work with images and whether such systems can replace artists.
- Researchers began using algorithms to create images in the 1950s and 1960s.
- Neural networks can copy artists’ styles, turn sketches into photorealistic illustrations, “revive” portraits, and create entirely new images.
- The cost of developing and training an algorithm ranges from zero to hundreds of millions of dollars.
- AI art can inspire, but its accessibility also creates a number of problems.
A brief history of AI art
The history of AI-generated art traces back to the birth of computer graphics and the invention of the computer, when researchers used basic algorithms to create simple patterns and shapes.
In 1967, the German mathematician and scientist Frieder Nake produced a portfolio called Matrizenmultiplikation (Matrix Multiplication), consisting of 12 images. He created a square matrix filled with numbers and repeatedly multiplied it by itself.
Nake translated the results into images interval by interval, assigning each range of values a visual sign of a particular shape and color, then placed the signs in a raster according to the values of the matrix.
In his works, Nake often used a random number generator and probably partially automated the multiplication process.
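The procedure can be illustrated with a toy sketch. This is not Nake’s actual code or exact mapping — the matrix size, value range, and symbol set here are arbitrary assumptions — but it shows the idea of multiplying a matrix by itself and rendering each value as a visual sign:

```python
import random

# Toy reconstruction of the "Matrizenmultiplikation" idea (illustrative only):
# take a square matrix, repeatedly multiply it by itself, and map each value
# interval to a visual symbol placed at its position in the raster.
random.seed(42)
N = 6
m = [[random.randrange(5) for _ in range(N)] for _ in range(N)]

def matmul_mod(a, b, mod=5):
    """Multiply two square matrices, keeping values in a bounded interval."""
    return [[sum(a[i][k] * b[k][j] for k in range(N)) % mod
             for j in range(N)] for i in range(N)]

symbols = " .:+#"  # each value interval gets its own "shape"
for step in range(3):
    m = matmul_mod(m, m)
    for row in m:
        print("".join(symbols[v] for v in row))
    print()
```

Each multiplication step produces a new “frame” of the raster, which is roughly how a sequence of related images can emerge from one starting matrix.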
In 1973, the artist Harold Cohen developed AARON, a set of algorithms capable of drawing certain objects “by hand.” He found that the system began to create previously unseen forms.
At first, the program generated abstract paintings; later it learned to draw more complex figures, including rocks, plants, and people.
Since the 1990s, researchers and artists have been using AI models in robotics, teaching machines to create paintings and sculptures.
In 2015, Google engineer Alexander Mordvintsev launched DeepDream, a computer vision program that uses a convolutional neural network to find and enhance patterns in images through algorithmic pareidolia.
The system works by distorting the source image according to which objects its fragments remind the model of.
When Google published the approach and open-sourced the algorithm, many tools and services appeared that let anyone convert their photos into “psychedelic” images.
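The core trick behind DeepDream is gradient ascent on the input: instead of adjusting the network to fit the image, the image is adjusted so that whatever a filter weakly “sees” in it gets amplified. Below is a minimal, hedged sketch of that idea only — a single hand-made “filter” stands in for a real convolutional network, and the numbers are arbitrary:

```python
import random

# Minimal sketch of DeepDream's core trick (not Google's actual code):
# nudge the *input* by gradient ascent so that the pattern a learned
# filter weakly "sees" in it gets amplified with every step.
random.seed(0)
image = [random.gauss(0, 1) for _ in range(8)]  # stand-in for an input image
w = [random.gauss(0, 1) for _ in range(8)]      # stand-in for a feature detector

def activation(x):
    """How strongly the filter's pattern is present in x."""
    return sum(wi * xi for wi, xi in zip(w, x))

before = activation(image)
for _ in range(50):
    # the gradient of w·x with respect to x is simply w
    image = [xi + 0.1 * wi for xi, wi in zip(image, w)]
after = activation(image)
print(f"activation: {before:.3f} -> {after:.3f}")
```

In the real system the “activation” comes from a layer of a trained network, which is why the amplified patterns look like dogs, eyes, and buildings rather than abstract noise.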
In 2022, AI art is used in various fields, including marketing, fashion, and entertainment.
The models also help create standalone paintings.
Neural networks for working with images
2022 may go down in history as the year AI art became mainstream. A boom of high-quality tools built on different algorithms has made neural art accessible to anyone with a smartphone and an internet connection.
AI models can copy artists’ styles, turn sketches into photorealistic illustrations, “revive” portraits, and create new images. Different or similar approaches and tools are used for different tasks.
Neural style transfer (NST) is a method based on convolutional neural networks that creates a picture imitating another image’s manner of execution. A user can convert a photo of a running dog into a Katsushika Hokusai engraving or generate a Mona Lisa in the style of Jan Vermeer.
Generative adversarial networks (GANs) are responsible for creating new works of art or repainting images in the style of others. These algorithms consist of two models at once: a generator that produces content and a discriminator that evaluates it.
GAN systems can draw images similar to the pictures in their training data, including human faces, cat faces, furniture, and other objects.
Generative networks can also help “revive” a landscape sketch, turning it into a realistic image.
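The generator-versus-discriminator setup can be sketched numerically. This is a deliberately trivial stand-in — one-dimensional “samples” and a single-parameter discriminator instead of deep networks — but it shows the two opposing objectives that GAN training balances:

```python
import math
import random

# Toy sketch of the GAN objectives described above: the discriminator wants
# to score real data near 1 and fakes near 0, while the generator wants its
# fakes scored near 1. Both models here are trivial 1-D stand-ins.
random.seed(0)

def discriminator(x, theta=1.0):
    """Probability that sample x came from the real data."""
    return 1.0 / (1.0 + math.exp(-theta * x))

real = [random.gauss(2.0, 1.0) for _ in range(100)]   # "training data"
fake = [random.gauss(-2.0, 1.0) for _ in range(100)]  # untrained generator output

d_loss = -(sum(math.log(discriminator(x)) for x in real) / len(real)
           + sum(math.log(1 - discriminator(x)) for x in fake) / len(fake))
g_loss = -sum(math.log(discriminator(x)) for x in fake) / len(fake)
print(f"discriminator loss: {d_loss:.3f}, generator loss: {g_loss:.3f}")
```

During training the two losses are minimized in alternation, pushing the generator’s output distribution toward the real data.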
The motto of such systems is “type and get.” The user comes up with any natural-language request, such as “a llama with dreadlocks in an astronaut suit,” and the algorithm generates a picture matching the prompt.
Text descriptions can consist of a huge number of words, and adding or removing them can radically change the result. They play a key role in creating images. There are even dedicated marketplaces where anyone can buy a prompt for a specific style.
Developers train AI generators on huge arrays of images and their text descriptions, teaching the model to find connections between them. They also often use diffusion: the algorithm starts generation from a set of random points and gradually refines the image, matching it to the given prompt while removing the noise.
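That refinement loop can be caricatured in a few lines. A real diffusion model uses a trained neural network to predict the noise at each step; this hedged sketch cheats by moving a small step toward a known target vector, which merely stands in for “what the prompt asks for”:

```python
import random

# Rough sketch of the diffusion idea: start from pure noise and refine
# step by step. Here the "denoiser" is faked by nudging toward a fixed
# target vector instead of a neural network's noise prediction.
random.seed(0)
target = [1.0, -1.0, 0.5, 2.0]             # stand-in for the prompted result
x = [random.gauss(0, 1) for _ in range(4)]  # start from random noise

for step in range(200):
    x = [xi + 0.05 * (ti - xi) for xi, ti in zip(x, target)]  # denoising step

error = max(abs(xi - ti) for xi, ti in zip(x, target))
print(f"remaining error: {error:.6f}")
```

The key property survives the caricature: each step removes a little noise, so after enough iterations the random starting point converges to a coherent result.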
Most popular AI generators restrict what can be created: they will not depict nudity, violence, realistic faces, or political figures. Such tools include OpenAI’s DALL-E 2, Google’s Imagen, and Midjourney. Some of them are paid.
However, there are systems without such restrictions, such as Stable Diffusion. Its developer, Stability AI, has said the model has no filters and can create any content.
Image generators can also be used to extend finished works. In August, OpenAI introduced Outpainting, a feature that lets DALL-E 2 expand paintings beyond their original borders using prompts.
How much does it cost to develop a neural network?
This is the most ambiguous question. The answer: anywhere from zero to several hundred million dollars.
To begin with, creating and training an AI algorithm requires knowledge. Users without programming skills, or without the desire to spend money on courses, first need to understand how neural networks work. Many free articles, resources, and services can help, such as Google’s educational project Teachable Machine.
You will also need a programming language such as Python and a library for developing and training neural networks, such as TensorFlow or PyTorch.
In addition, you need to collect a training dataset for the task at hand: you can build it yourself, take it from open sources, or buy it. Developing a text-to-image generator requires a set of pictures paired with text descriptions.
The accuracy of the model directly depends on the quality and quantity of the data, as well as on the hardware used and the computing resources spent.
With all of the above in place, you can create an image-processing neural network for free.
However, large companies like Meta, Amazon, Apple, Microsoft, and Alphabet invest tens of billions of dollars in such products. The costs cover research, development, training, performance testing, deployment, commercialization, and support. The process can take years, and in the end the project may be shut down or, on the contrary, become indispensable.
Advantages and disadvantages of visual AI algorithms
Among the advantages of using neural networks to create works of art is the generation of realistic data. Such images find application in films, advertising, games, and other areas.
AI algorithms “think” in unusual ways. They can create previously unseen images, arrange objects unexpectedly, and mix textures arbitrarily. Such art can be a source of inspiration for more significant projects.
Thanks to the constant improvement of the technology and its data, AI art keeps developing and regularly brings new ideas.
In addition, algorithms can speed up the solution of certain tasks. Neural networks can be used to create logos and video clips, and for marketing purposes.
Among the drawbacks is the lack of human emotion. Sometimes that is an advantage, but many people want a work of art to carry a story.
With limited training datasets, AI art can become boring. Without constant updates and training on new data, the generated images begin to repeat themselves and lose their uniqueness.
Developers also cannot control the creative process of neural networks. After training, the algorithm produces results based on its learned weights; if those results are unsatisfactory, the model has to be retrained.
But the main problems with AI concern ethics. Developers cannot always control the distribution and use of the technology. Algorithms cannot be considered authors of works, yet their creators bear responsibility for the algorithms’ incorrect “behavior.”
Because the technology is so accessible, attackers can use AI to create images that deceive people, steal their personal data, and spread hate speech.
Will neural networks replace artists?
Photography was once considered a new trend in creativity. After almost 200 years of existence, it has not replaced painters and other artists, but it has forced them to develop and adapt.
This created a new generation of creative people. Artists and photographers began producing joint works that could surprise, attract, and provoke thoughts about beauty.
Art, in whatever form it takes, makes people feel. And there is plenty of room for new artistic voices capable of evoking previously unknown sensations.
The creators of generative AI may slightly shift existing forms of creativity, but they will not destroy them.
Tools like DALL-E 2, Stable Diffusion, and Midjourney will likely continue to evolve into highly sophisticated art engines and help artists complement their work.
With sufficient and continuous development of neural networks, people will be able to regularly use the technology for inspiration and to expand their creative horizons.
Subscribe to FORKLOG news on Telegram: ForkLog AI, all the news from the world of AI!