
Stable Diffusion: Neural Network and Integration with Albato
Stable Diffusion
·
3/20/2023
·
5 min. read

These days, image-generating neural networks have made a quantum leap. Now, when you see an illustration, you can't help but wonder whether it was created by a digital artist or generated by a neural network.

In this article, we'll look at one of the popular models - Stable Diffusion - and how it can be integrated with other systems (e.g. Telegram) using Albato.

Stable Diffusion is a model that converts text into an image. To generate a picture, you only need to send a text describing what you want to see in it.

This kind of text is called a prompt, and writing prompts has become a skill in its own right. After all, it is the quality of the prompt that determines how closely the image you end up with matches your text.

All images created by Stable Diffusion are copyright-free and released under the CC0 1.0 license, which means the generated images can be distributed and used for any purpose.

Since Stable Diffusion is an open-source model, anyone can extend and enhance it. To provide the Stable Diffusion API, we have chosen stablediffusionapi.com, which offers the largest number of Stable Diffusion-based models beyond the standard version. You will have 50+ models available, including, for example, a version of Stable Diffusion trained to produce images in the style of the Midjourney model.
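
To give a feel for what Albato does behind the scenes, here is a minimal Python sketch of such a text-to-image request. The endpoint URL and JSON field names are assumptions based on the parameters described later in this article, so treat it as an illustration and check the provider's documentation for the exact API.

import requests

# Assumed text-to-image endpoint of stablediffusionapi.com; verify in the docs.
API_URL = "https://stablediffusionapi.com/api/v3/text2img"

payload = {
    "key": "YOUR_API_KEY",  # your personal API key
    "prompt": "a ginger cat sitting on a windowsill at sunset, digital painting, detailed",
    "width": 512,
    "height": 512,
    "samples": 1,           # number of images you want in response
}

response = requests.post(API_URL, json=payload, timeout=120)
response.raise_for_status()
print(response.json())      # the reply contains link(s) to the generated image(s)

With Albato, none of this code is required - the same request is assembled for you from the fields you fill in.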

What other Stable Diffusion modifications are available for generation? How do they differ and which one is better?

In addition to the basic Stable Diffusion v2, more than 50 community-created modifications are available for generation. All of them have their own characteristics and are used for different purposes. Let's look at a few examples:

Stable Diffusion v2

The original model, suitable for a vast range of use cases. Example of a generated image:

1.png

MidJourney V4

A model trained on the dataset used to train the Midjourney neural network.

This is an example of a generated image:

2.jpeg

Protogen x3.4

A model that lets you generate ultra-realistic images with particular attention to fine details.

Example of a generated image:

3.jpeg

SynthwavePunk

This model is tailored to generate images in the Synthwave and Punk styles.

4.png

LinkedIn Photoshoot

A model trained on a dataset of LinkedIn avatars.

5.png

There is also a section on the website where you can find model modifications and see example images for each of them. Link to the section with different models.

How to write the perfect prompts

There is no single "formula for a good prompt", but the more detailed the request, the better the generated image will be.

Examples of prompts and images can be found in a dedicated section of the website. If you enter “cats”, you will see all prompts containing the word “cats” and the images they generated.

6.png

Here is a great article on how the use of different words and qualifiers changes the result of the generation.

7.png

Integration of Stable Diffusion with various systems

What to keep in mind when setting up integrations.

With Albato, you can implement various scenarios using Stable Diffusion - for example, generate images right in Telegram or automate the generation of cover images for your website or blog.

Before setting up the integration, connect Stable Diffusion to Albato. We describe how to do this in our instructions.

How to connect Stable Diffusion to Albato

What is available in Albato:

1 trigger (an event which starts an automation)

  • Image generation finished

2 actions (what Albato performs when a trigger occurs)

  • Generating an image from text
  • Image generation from text (using public models)

If you choose the Generating an image from text action, Albato will generate an image using the Stable Diffusion v2 model. If you choose the Image generation from text (using public models) action, you can choose any of the 50+ available models.

The list of all available models can be found in your Albato account (Apps section → the List of public models tab).

8.png

You can also adjust different parameters when setting up actions with any of the models. These parameters will change the result of the generation, but only one of them is required - The text for image generation. We will briefly describe each of the parameters:

The text for image generation - a required field where you enter a prompt, the text on the basis of which the neural network will generate the image.

ID public Model - here you can choose the model modification you want to use. This option is only available in the Image generation from text (using public models) action. It is not available in the Generating an image from text action, as it uses the default Stable Diffusion v2 model.

Number of images you want in response - the default value is 1.

Items you don't want in the image - here you can specify elements that should definitely not appear in the final image.

Width and height of output image - these fields can be used to set the size of the final image. By default, each parameter is 512 pixels.

Number of denoising steps - in a nutshell, this parameter determines the number of refinement (“noise reduction”) iterations, so the higher it is, the better the quality of the final result.

Scale for classifier-free guidance - determines how precisely the neural network should follow the prompt. The higher this parameter, the more closely the neural network follows your prompt.

Improve prompts to achieve better results - here you can choose “yes” or “no”.

9.png
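
Mapped onto a direct API call, these parameters might translate into a payload like the one below. The field names (model_id, negative_prompt, num_inference_steps, guidance_scale, enhance_prompt) are assumptions that mirror the Albato parameter names, not confirmed API keys, so check the documentation before using them.

import requests

API_URL = "https://stablediffusionapi.com/api/v3/text2img"  # assumed endpoint, as in the earlier sketch

payload = {
    "key": "YOUR_API_KEY",
    "model_id": "midjourney",          # assumed ID of a public model; omit it to use the default model
    "prompt": "a cozy cabin in a snowy forest at dusk, warm light in the windows",
    "negative_prompt": "blurry, low quality, text, watermark",  # items you don't want in the image
    "width": 512,                      # width of output image
    "height": 512,                     # height of output image
    "samples": 1,                      # number of images you want in response
    "num_inference_steps": 30,         # number of denoising steps
    "guidance_scale": 7.5,             # scale for classifier-free guidance
    "enhance_prompt": "yes",           # improve prompts to achieve better results
}

response = requests.post(API_URL, json=payload, timeout=120)
print(response.json())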

Stable Diffusion + Telegram Integration

This integration lets you generate pictures right in the Telegram chatbot.

You need to create two automations.

The first will send an image generation request to Stable Diffusion. There are 2 steps to set up:

1st - Incoming message (Telegram)

2nd - Generating an image from text (Stable Diffusion)

10.png

When setting up the action, you need to choose a value from the first step - Telegram message - in the Text to generate image field. The rest of the parameters can be filled in according to your needs.

11.png
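
For comparison, outside of Albato this mapping step boils down to taking the text of the incoming Telegram message and using it as the prompt of the generation request. A rough sketch with a placeholder bot token (it assumes the latest update is a plain text message):

import requests

BOT_TOKEN = "123456:ABC-DEF"  # placeholder Telegram bot token

# Fetch recent incoming messages (this is what the Albato trigger listens for).
updates = requests.get(f"https://api.telegram.org/bot{BOT_TOKEN}/getUpdates", timeout=60).json()
prompt = updates["result"][-1]["message"]["text"]  # text of the latest message

# Use the chat text as the prompt in the same payload as the earlier sketch.
payload = {"key": "YOUR_API_KEY", "prompt": prompt, "width": 512, "height": 512, "samples": 1}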

The second automation will return the generated image. You will need to set up 3 steps as in the example below:

12.png

The second step, an HTTP request, is needed to get the generated picture as a file.

Set it up as follows: specify any name and any URL (at this point we need an "empty" request, so you can insert any link in the URL field).

13.png

Next, Albato will ask you to fill in the URL again - and here you need to choose the value from the previous step - Stable Diffusion (Link to the image).
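
In plain terms, this intermediate HTTP step does nothing more than fetch the file behind the link. A rough Python equivalent, where the URL is a placeholder for the value Albato substitutes from the Stable Diffusion step:

import requests

# Placeholder for the "Link to the image" value from the previous step.
image_url = "https://example.com/generated.png"

# Download the generated picture so it can be passed on as a file.
with open("generated.png", "wb") as f:
    f.write(requests.get(image_url, timeout=60).content)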

Once you add the third step - Telegram (Send Photo) - the second automation setup is completed.
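
This final step corresponds to Telegram's sendPhoto method, which posts the downloaded picture to the chat. A sketch with placeholder credentials, continuing from the download snippet above:

import requests

BOT_TOKEN = "123456:ABC-DEF"  # placeholder bot token
CHAT_ID = "987654321"         # placeholder chat ID

# Send the previously downloaded picture back to the chat.
with open("generated.png", "rb") as photo:
    requests.post(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendPhoto",
        data={"chat_id": CHAT_ID},
        files={"photo": photo},
        timeout=60,
    )

In Albato itself, all of this is configured through the interface - no code is required.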

Let's run both automations and see how they work:

14.jpeg

This is how it will look in Telegram.

Stable Diffusion + Google Sheets Integration

This integration allows you to generate images, get links (URLs) for them and enter those URLs into Google Sheets.

This allows you to automate a lot of tasks, for example:

  • set up automations with any apps and systems that upload pictures via a link.

For example, you can set up URL transfer to Creatium and automatically generate covers for your website or blog. We already have a video with step-by-step instructions for this case, where the only manual step is inserting the Midjourney links to the generated images. Now, you can automate this step too!

  • upload the generated images to Google Drive.

The Google Sheets automation looks like this. When setting up the step with Google Sheets, specify the column for the links.

15.png
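
For reference, the "write the link into a sheet" step that Albato performs corresponds to a simple append operation. Here is a sketch using the gspread library, with placeholder spreadsheet and worksheet names:

import gspread

# Authenticate with a Google service account (requires a credentials JSON file).
gc = gspread.service_account(filename="service_account.json")

# Placeholders: replace with your spreadsheet and worksheet names.
worksheet = gc.open("Generated images").worksheet("Sheet1")

# Append the prompt and the link to the generated image as a new row.
worksheet.append_row(["a cozy cabin in a snowy forest", "https://example.com/generated.png"])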

