How to create your first video!
From the homepage, click Create new video. You'll be prompted to create an account.
We're now ready to create our first video. The first step is to select the AI model you want to use – we offer different models, each with its own art style. You can learn more about the standard and custom AI models here!
After choosing the AI model, you have to define the first frame of your video. The first frame can be either (a) an image you render with the AI from a text prompt, or (b) an image you already have. Here you can quickly learn how to optimise your prompt.
After choosing the first image, you're redirected to the Video Editing page, where you'll edit and export your video!
The video editor allows you to:
Render video from prompt
Add new video sections
Edit video sections
Check out the knowledge base for an explanation of how to best use the Neural Frames video editor.
When you’re finished editing, you need to export the video
You're then redirected to your library, where you can download the exported .mp4 video
Standard AI models
Neural Frames offers several standard AI models with different styles.
We will continue to add more models to Neural Frames as the space evolves… and it’s evolving really fast, credit to all the awesome creators generating these powerful models.
Custom AI models
Neural Frames also offers you the possibility to create your own custom model!
This feature is only available to our paid subscribers.
Note: the UX is still very much work in progress. Please be patient with us :)
What is it?
A custom model is an AI model trained on images of either an object, a style, or an individual.
It allows you to generate videos that include the object, style, or individual you trained. To invoke the trained element in a prompt, use the keyword sks followed by the object/style/type. Examples: "sks shoe", "sks woman", "sks art".
How does it work?
First you need to upload between 10 and 20 images of the object/style/individual. We recommend uploading as many as possible.
Once that's done, we'll train your model and, when it's ready, make it available under the "Custom models" section. You can then select that model and proceed with the normal flow of uploading an image or creating a first frame from a prompt.
As explained above, you'll use "sks <keyword>" in your prompts to invoke the element you trained into the model.
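For example, once a model trained on shoe photos is ready, prompts could look like the ones below. The prompt wording is purely illustrative; only the "sks <object/style/type>" convention comes from the docs above:

```python
# Illustrative prompts that invoke a trained custom element via the "sks" token.
prompts = [
    "a sks shoe floating through a pastel dreamscape, studio lighting",
    "portrait of a sks woman, golden hour, 35mm film grain",
    "a city skyline painted in sks art",
]
for p in prompts:
    print(p)
```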
How long does it take to train the model?
Training usually takes between 5 and 10 minutes to complete.
What happens to the uploaded images?
They're used to train your model and are then deleted.
Creating the first image for your video
Prompt
This is a piece of text that serves as a starting point for the AI to generate new content.
(Optional) Negative Prompt
This is a constraint or instruction we give to the model to avoid generating certain types of content or output. We can do that by inserting either specific keywords (e.g. "hands", "faces", …) or patterns (e.g. "No windows in the buildings").
Video format
This lets you define the format your video will be produced in. The current options are 1:1, 16:9 and 9:16.
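For intuition, here is how those aspect ratios translate into pixel dimensions. The base resolution of 512 is an assumption for the example, not necessarily what Neural Frames renders at:

```python
# Hypothetical helper: turn an aspect-ratio string into pixel dimensions.
# The base size of 512 is an illustrative assumption.
def dimensions(aspect: str, base: int = 512) -> tuple[int, int]:
    w, h = map(int, aspect.split(":"))
    if w >= h:
        return base * w // h, base   # landscape (or square): fix the height
    return base, base * h // w       # portrait: fix the width

print(dimensions("1:1"))    # (512, 512)
print(dimensions("16:9"))   # (910, 512)
print(dimensions("9:16"))   # (512, 910)
```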
CFG
This is a configuration value for the AI model (CFG stands for classifier-free guidance). It determines how closely the model follows your prompt (text instruction). The range goes from 0 to 20: at 0 the prompt is essentially ignored and the model is free to be very creative, while at 20 it follows your prompt very strictly. By default Neural Frames sets this value to 7.5, but you can experiment with different levels to see what works best for you.
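For the curious: under the hood, the model makes one prediction that ignores the prompt and one conditioned on it, and the CFG value controls how far the result is pushed toward the conditioned prediction. A minimal generic sketch of the idea (not Neural Frames' actual implementation):

```python
import numpy as np

def apply_cfg(uncond: np.ndarray, cond: np.ndarray, cfg: float = 7.5) -> np.ndarray:
    """Classifier-free guidance: blend the unconditional prediction with
    the prompt-conditioned one. cfg = 0 ignores the prompt entirely;
    larger values push harder toward it."""
    return uncond + cfg * (cond - uncond)

# Toy vectors standing in for model predictions:
uncond = np.array([0.0, 0.0])
cond = np.array([1.0, 1.0])
print(apply_cfg(uncond, cond, cfg=0.0))   # [0. 0.]   -> prompt has no influence
print(apply_cfg(uncond, cond, cfg=7.5))   # [7.5 7.5] -> strongly prompt-driven
```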
Actions
Render
This triggers the model to run following the prompt and instructions defined.
Rendering results in 4 images for you to choose from; if needed, you can adjust the prompt and render new images until you find the right one :)
Pimp my prompt
Pimp my prompt takes your initial prompt and asks an LLM (GPT-4) to generate a more detailed version. This feature works really well with short prompts – for example, ones consisting of a few keywords. Seeing the "pimped" outputs can also help you become better acquainted with writing performant AI prompts.
This is a great feature when you have an initial top-level idea of what you want, but aren’t sure about the details (e.g. depth, art style, …).
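If you want to reproduce the pattern yourself, the mechanics are roughly: send the short prompt to an LLM together with an instruction to enrich it. Below is a rough sketch using the OpenAI Python client; the system instruction is invented for illustration and is not the one Neural Frames actually uses:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def pimp_my_prompt(prompt: str) -> str:
    """Ask GPT-4 to expand a terse prompt into a detailed one.

    The system instruction below is a guess for illustration only."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "Rewrite the user's idea as a rich text-to-image "
                           "prompt: add medium, lighting, mood and art style.",
            },
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

print(pimp_my_prompt("neon city, rain"))
```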
Producing your video
The Video Editor includes 3 components: (1) the prompt settings, (2) the video timeline editor and (3) the video preview. Below is a short guide on how to use them.
Every time you render a new block of video, your video is automatically saved and made available in your user library. Even if your browser or computer crashes, you can still recover all of your work.
Prompt Settings
Prompt
This is a piece of text that serves as a starting point for the AI to generate new content.
(Optional) Negative Prompt
This is a constraint or instruction we give to the model to avoid generating certain types of content or output. We can do that by inserting either specific keywords (e.g. "hands", "faces", …) or patterns (e.g. "No windows in the buildings").
Strength
This parameter defines how much each new frame the model generates resembles the previous one. The higher the value, the less the new frame will resemble the previous one.
Here are a few examples of how you can use this parameter to your advantage (a toy sketch of the mechanism follows these tips):
When changing the prompt in the video it can also be a good idea to reduce the strength parameter for a while to make the transition smoother
When starting a new video from an uploaded image you might want to start with a lower value and then progressively increase it.
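Here is the toy sketch of the strength dial promised above. Real frame generation re-noises and denoises with a diffusion model; this stand-in only illustrates how the parameter trades resemblance to the previous frame for novelty:

```python
import numpy as np

rng = np.random.default_rng(0)

def next_frame(prev_frame: np.ndarray, strength: float) -> np.ndarray:
    """Toy model: replace `strength` of the previous frame with new content.

    strength = 0 keeps the frame identical; strength = 1 produces a frame
    unrelated to the previous one. (A real diffusion model re-noises and
    denoises instead of blending, but the dial behaves the same way.)"""
    fresh = rng.random(prev_frame.shape)  # stand-in for freshly generated pixels
    return (1 - strength) * prev_frame + strength * fresh

frame = rng.random((64, 64, 3))
for strength in (0.1, 0.5, 0.9):
    diff = np.abs(next_frame(frame, strength) - frame).mean()
    print(f"strength={strength}: mean change {diff:.3f}")  # grows with strength
```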
Smooth
The smooth parameter allows you to control the smoothness of your video as it plays. The scale goes from 0 to 6.
This works through a process called frame interpolation.
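In its simplest form, frame interpolation synthesises in-between frames from two neighbours. The sketch below just cross-fades pixels (production interpolators are motion-aware), and how the 0-6 smooth scale maps to a number of in-between frames is left as an assumption here:

```python
import numpy as np

def interpolate(frame_a: np.ndarray, frame_b: np.ndarray, n: int) -> list:
    """Insert n in-between frames by linearly cross-fading frame_a into frame_b.

    Motion-aware interpolators (e.g. optical-flow based) do this far better,
    but the role in the pipeline is the same: smoother apparent motion."""
    return [(1 - t) * frame_a + t * frame_b
            for t in np.linspace(0, 1, n + 2)[1:-1]]

a, b = np.zeros((4, 4)), np.ones((4, 4))
for f in interpolate(a, b, n=3):   # a higher smooth level would add more frames
    print(f.mean())                # 0.25, 0.5, 0.75
```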
Zoom
This parameter defines how much zoom your next frames will have: increase it if you want the video motion to close in, or decrease it if you want to zoom out.
This parameter is set to 0 by default and can be a value between -20 and 20.
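Mechanically, a zoom control like this is typically implemented by scaling each frame around its centre before the next frame is generated. A hedged sketch follows; the 0.002 scale step per setting unit is a made-up value for illustration:

```python
import numpy as np
from scipy import ndimage

def apply_zoom(frame: np.ndarray, zoom_setting: float) -> np.ndarray:
    """Zoom a 2-D frame around its centre while keeping its size.

    Positive settings zoom in, negative settings zoom out, 0 does nothing.
    The 0.002 step per unit is illustrative, not Neural Frames' mapping."""
    factor = 1.0 + 0.002 * zoom_setting
    centre = (np.array(frame.shape) - 1) / 2
    # affine_transform maps output pixel o to input pixel (o / factor + offset);
    # this offset keeps the centre of the frame fixed.
    return ndimage.affine_transform(
        frame, matrix=np.eye(2) / factor, offset=centre - centre / factor, order=1
    )

frame = np.random.default_rng(0).random((64, 64))
zoomed_in = apply_zoom(frame, 10)    # content appears ~2% closer
zoomed_out = apply_zoom(frame, -10)  # content appears ~2% further away
```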
Video block stats & delete
At the bottom of the Prompt Settings you can find the start position of your video block and its duration.
Next to these two data points there is a delete button for removing that video block.
Video timeline editor
The video timeline editor consists of a timeline with a single track displaying video and interpolation blocks. This is where you edit your video.
Rendering video
This button triggers the video generation process. You can start and interrupt it as needed – oftentimes you'll start generating frames, then interrupt the generation to adjust the prompt settings.
Please note that if you re-render previously rendered video, the previous video is overwritten by the new one and lost. Since it's an AI generating all the content, there is no way to make it generate the exact same content again.
(a) Video blocks and (b) interpolation blocks
Video blocks, as the name suggests, are sections of the video where you can render new frames based on the prompt and instructions you define.
Interpolation blocks are sections of video where frame interpolation is performed. We use frame interpolation to generate smooth transitions between two video frames. By default, when you create a new video block we add an interpolation block to ensure there's a transition between the two video blocks.
When you want new video content to be generated you use (a) video blocks, and when you want to transition between different video content you use (b) interpolation blocks.
In the timeline you can:
add new video blocks by double-clicking in the timeline. When you add a new video block an interpolation block is automatically added between the two video blocks
edit existing video by selecting the video block you want to modify – you can modify its (a) prompt settings, and/or (b) size, and/or (c) duration.
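One way to picture what the timeline holds is an alternating sequence of the two block types. The data model below is hypothetical (Neural Frames' internal representation isn't public), but it mirrors the behaviour described above:

```python
from dataclasses import dataclass

@dataclass
class VideoBlock:
    prompt: str
    start: float      # seconds into the timeline
    duration: float   # seconds

@dataclass
class InterpolationBlock:
    duration: float   # length of the transition between two video blocks

# Double-clicking the timeline adds a video block; an interpolation block
# is automatically inserted between it and the previous video block.
timeline = [
    VideoBlock("a neon city at night", start=0.0, duration=4.0),
    InterpolationBlock(duration=1.0),
    VideoBlock("the city dissolving into stars", start=5.0, duration=4.0),
]
for block in timeline:
    print(block)
```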
Video preview
The video preview allows you to watch your video. You can also move the cursor back to the start or to the last generated frame.
Account library
The account library stores all your projects - finished or unfinished. You can edit your projects, or download the exported ones.
Note: to be able to download your video, you first need to Export it.