How to create your first video!

Selecting the AI and defining your first frame

  1. From the homepage, click Create new video

  2. You’ll be prompted to create an account

  3. We’re now ready to create our first video. The first step is to select the AI you want to use for your video; we offer different models, each with its own art style. You can learn more about the standard and custom AI models here!

  4. After choosing the AI model, you define the first frame you want to use for the video. The first frame can either be (a) an image you render through AI via a text prompt, or (b) an image you already have. Here you can quickly learn how to optimise your prompt.

  5. After choosing the first image, you’re redirected to the Video Editing page, where you’ll edit and export your video!

Producing your video

  1. The video editor allows you to:

    1. Render video from prompt

    2. Add new video sections

    3. Edit video sections

Check out the knowledge base for an explanation of how to best use the Neural Frames video editor.

  2. When you’re finished editing, you need to export the video

  3. You’re then redirected to your library, where you can download the exported .mp4 video

 

Knowledge base

AI models

Standard AI models

Neural Frames offers several standard AI models with different styles.

We will continue to add more models to Neural Frames as the space evolves… and it’s evolving really fast. Credit to all the awesome creators building these powerful models.

Custom AI models

Neural Frames also offers you the possibility to create your own custom model!

This feature is only available to our paid subscribers.

Note: the UX is still very much a work in progress. Please be patient with us :)

Creating the first image for your video

Prompt and instructions

Prompt

This is a piece of text that serves as a starting point for the AI to generate new content.

(Optional) Negative Prompt

This is a constraint or instruction we give to the model to avoid generating certain types of content or output. We can do that by either inserting specific keywords (e.g.: “hands”, “faces”, …) or patterns (e.g.: “No windows in the buildings”).

Video format

This lets you define the aspect ratio you want your video to be produced in. Currently there are the following options: 1:1, 16:9 and 9:16.

CFG

This is a configuration value for the AI model; it determines how closely the model follows your prompt (text instruction) when interpreting it. The range for this parameter goes from 0 to 20: low values let the model interpret your prompt very freely (more creative), whereas high values make it follow your prompt strictly. By default Neural Frames sets this value to 7.5, but you can experiment with different levels to see what works best for you.
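For the technically curious, here is a minimal sketch of how a CFG scale is typically applied in diffusion models such as Stable Diffusion. This is an illustration of the standard formula, not Neural Frames’ internal code: the model makes one prediction with your prompt and one without, and the CFG value controls how far the result is pushed toward the prompted prediction.

```python
# Illustrative sketch of classifier-free guidance (CFG), as commonly used
# in diffusion models. NOT Neural Frames' actual implementation.

def apply_cfg(uncond_pred, cond_pred, cfg_scale):
    """Blend the unconditional and prompt-conditioned predictions.

    cfg_scale = 1.0 returns the conditional prediction unchanged;
    larger values push the result further toward the prompt.
    """
    return [u + cfg_scale * (c - u) for u, c in zip(uncond_pred, cond_pred)]

# Toy per-pixel values, just to show the blending:
uncond = [0.2, 0.4]
cond = [0.6, 0.1]
print(apply_cfg(uncond, cond, 7.5))
```

With a scale of 7.5 (the default mentioned above), the prompted prediction is amplified well beyond a plain average, which is why higher CFG values feel "stricter".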

Actions

Render

This triggers the model to run following the prompt and instructions defined.

Rendering results in 4 images for you to choose from, and if needed you can adjust the prompt and render new images until you find the right one :)

Pimp my prompt

Pimp my prompt takes your initial prompt and asks an LLM (GPT-4) to generate a more detailed version. This feature works really well with short prompts – for example, consisting of a few keywords. By seeing the “pimped” outputs it can also help you become better acquainted with how to generate performant AI prompts.

This is a great feature when you have an initial top-level idea of what you want, but aren’t sure about the details (e.g. depth, art style, …).

 

Producing your video

The Video Editor includes 3 components: (1) prompt and instructions, (2) video timeline editor and (3) video preview. Below there’s a short guide on how to use them.

Every time you render a new block of video, your video is automatically saved and made available in your user library. Even if your browser or computer crashes, you can still recover all of your work.

Prompt Settings

Prompt

This is a piece of text that serves as a starting point for the AI to generate new content.

(Optional) Negative Prompt

This is a constraint or instruction we give to the model to avoid generating certain types of content or output. We can do that by either inserting specific keywords (e.g.: “hands”, “faces”, …) or patterns (e.g.: “No windows in the buildings”).

Strength

This parameter defines how much each new frame the model generates resembles the previous one. The higher the value, the less the new frame will resemble the previous one.

As a rule of thumb, lower values keep the motion coherent, while higher values introduce more variation and faster visual change.
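Under the hood, a strength-style parameter in image-to-image diffusion pipelines commonly decides how much of the denoising schedule is re-run on the previous frame. The mapping below is a hypothetical sketch of that idea, not Neural Frames’ actual implementation:

```python
# Hypothetical illustration of how "strength" is often mapped in
# image-to-image diffusion pipelines: it sets how much of the denoising
# schedule is re-run, and therefore how far a frame can drift.

def denoise_steps(strength, total_steps=50):
    """Return how many of the scheduler's steps are actually run.

    Strength near 0 -> barely any re-noising, so the new frame stays
    close to the previous one. Strength near 1 -> an almost fresh
    generation that may look completely different.
    """
    strength = min(max(strength, 0.0), 1.0)  # clamp to [0, 1]
    return int(round(strength * total_steps))

print(denoise_steps(0.3))  # a gentle change
print(denoise_steps(0.9))  # a drastic change
```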

Smooth

The smooth parameter allows you to control the smoothness of your video as it plays. The scale goes from 0 to 6.

This works through a process called frame interpolation.
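To build an intuition for frame interpolation, here is a toy sketch that creates in-between frames by blending two frames linearly. Real interpolators rely on learned motion estimation rather than simple pixel blending, so treat this purely as a conceptual illustration:

```python
# Toy sketch of frame interpolation: generating in-between frames by
# blending two existing frames. Conceptual only; production interpolators
# use motion-aware models rather than linear pixel blends.

def interpolate_frames(frame_a, frame_b, n_between):
    """Return n_between blended frames between frame_a and frame_b."""
    frames = []
    for i in range(1, n_between + 1):
        t = i / (n_between + 1)  # blend factor, 0 < t < 1
        frames.append([(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)])
    return frames

# Two toy one-pixel "frames"; one in-between frame lands halfway:
print(interpolate_frames([0.0], [1.0], 1))  # → [[0.5]]
```

A higher Smooth value corresponds to inserting more in-between frames, which is why motion appears more fluid.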

Zoom

This parameter defines how much zoom your next frames will have: increase it to make the video motion close up (zoom in), or decrease it below 0 to zoom out.

This parameter is set to 0 by default and can be a value between -20 and 20.
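One common way to implement a zoom-in between frames is to crop the previous frame slightly around its centre and scale the crop back up to full size. The sketch below illustrates that idea; the mapping from a zoom value to a crop size is an assumption for illustration, not Neural Frames’ actual formula:

```python
# Illustrative sketch of a per-frame zoom-in as a centred crop.
# The zoom-percentage-to-scale mapping here is an assumption,
# not Neural Frames' actual formula.

def crop_box(width, height, zoom):
    """Return the (left, top, right, bottom) crop for a zoom-in step.

    zoom is a small positive percentage; the crop is centred, so
    rescaling it back to full size produces an apparent zoom-in.
    A negative zoom would instead pad the frame (zoom out).
    """
    scale = 1.0 - zoom / 100.0        # e.g. zoom=10 keeps 90% of the frame
    crop_w, crop_h = width * scale, height * scale
    left = (width - crop_w) / 2
    top = (height - crop_h) / 2
    return (left, top, left + crop_w, top + crop_h)

print(crop_box(100, 100, 10))  # → (5.0, 5.0, 95.0, 95.0)
```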

Video block stats & delete

At the bottom of the Prompt Settings you can find the start position of your video block and its duration.

Next to these two data points there is a delete button, which removes that video block.

Video timeline editor

The video timeline editor consists of a timeline with a single track displaying video and interpolation blocks. This is where you edit your video.

Rendering video

This button triggers the video generation process. You can start and interrupt it as needed; oftentimes you’ll start generating frames, then interrupt the generation to adjust the prompt settings.

Please note that if you re-render previously rendered video, the previous video is overwritten by the new one and lost. Since an AI generates all the content, there is no way to make it produce the exact same content again.

(a) Video blocks and (b) interpolation blocks:

  1. Video blocks, as the name suggests, are sections of the video where you can render new frames based on the prompt and instructions you define

  2. Interpolation blocks are sections of video where frame interpolation is performed. We use frame interpolation to generate smooth transitions between two video frames. By default, when you create a new (a) video block we add an interpolation block to ensure there’s some sort of transition between the two (a) video blocks.

When you want new video content to be generated you use (a) video blocks, and when you want to transition between different video content you use (b) interpolation blocks.

Video editing

In the timeline you can:

Video preview

The video preview allows you to watch your video. You also have the ability to move the cursor back to the start or the last generated frame.

 

Account library

The account library stores all your projects - finished or unfinished. You can edit your projects, or download the exported projects.

Note: to be able to download your video, you first need to Export it.