I just stumbled upon this video in my feed and had to share it. Ever since getting acquainted with machine learning, I've wondered if it could be used to generate smoother animation frames, so it's really cool to finally see a tangible attempt. The quality varies a lot in the video, but some examples really look incredible out of the box, like the MK3 Sektor stun animation.
I think even if you don't get ready-to-use output straight away (which doesn't feel feasible in the first place, given how it likely garbles pixels and such), this tech could still be a great boon for artists and animators. It could cut down on a lot of otherwise mundane grunt work filling in the spaces in between the important keyframes, with the generated output used as a base to work from.
The project has a Patreon which gives you access to builds:
It does seem to be primarily geared towards live action and CG. The problem with using this on 2D is that, between keyframes, limbs and other parts will sometimes morph in and out of existence - something like CACANI would currently be more useful from an animator's perspective. I'm curious to see the results of a model trained specifically for limited 2D anime or the like.
Kind of looks good for a lot of the gooey-looking animations. For some of the others, just forcing everything to be more fluid made the animation less expressive. Interesting tool, though. AI tooling is going to be insane in the next 10 years.
I'm not sure it can or should be used for everything, but I think it does look better than I expected it to and it probably has a lot of practical applications.
Yeah, those actually looked pretty good. I also really liked how the animation of the water looked on the background tiles, and some of the sfx. It kind of makes 2D characters look a little too smooth, though. It's filling in the gaps between frames but not adding detail, so the animations lose a bit of life to me.
works much better on sprites that already have an unusually high frame count... which is another way of saying that most of the examples in the OP are trash, since the tool doesn't know things like when to decelerate or accelerate
This was basically exactly my thought. It's cool if people want to use this to help crank out extra animation frames for basically nothing, but I would never want to see this blanket-used on a sprite based game.
A lot of these originals are already so nicely animated that it seems kind of pointless to interpolate further. The tech looks cool though and surely will have useful applications.
I'll echo that I don't like the ones that end up looking more like flash tweening.
The only problem with this so far is that, unlike real in-betweeners, there's no way for the algorithm to know the intention of the original animation. This means that some timing can be made worse, reducing 'impact' of the animation.
Still, neat. I saw this a few months ago looking into various neural network projects regarding animation.
The way some of them work really well while others don't suggests that it should be possible to design your animations so that this tool produces good results.
For someone who wants to make a game with sprites at 60+ fps, this could be a great way to do that with much less work.
Oh God, I hope the next generation of TVs starts implementing this. I hate the 24fps movie standard so much. Thankfully my Sony has pretty good interpolation, so I can watch movies without annoying low-framerate stuttering. This was made for me.
They even used GGXrd for this, which makes no sense, since some of those animations have intentionally cut frames to give them a more authentic anime feel.
There are some impressive ones, but in some cases I say keep it as is.
Nice!
A couple of them looked a bit odd on close inspection, but most looked amazing.
Loved that one where a hand opens up to reveal a floating ball. Just so smooth!
That's pretty cool; it's fun how it makes the areas that aren't animated stand out a little more.
I love a silky smooth pixel art animation, so it's fascinating to see it across so many pieces here, especially as so much pixel art animation apes either cost-limited anime or memory-limited retro console art.
It looks like it's interpolating pixel-art, but not as pixel-art - rather as animatable pieces, so there is sub-pixel movement.
It's taking advantage of the readability and texture of pixel-art, but it could very well be any kind of 2D animation.
It has its place where pixel-art isn't shown at a 1:1 pixel scale, or where pixel-art is used out of technical and budgetary limitations.
Oh, I hadn't thought about how the generated output could serve as a base for the artist to work on later. Interesting idea!
I might take a look at it next time I animate something in pixel-art.
Edit: The last public version at this point is 2.2 and is hosted on Google Drive, but the previous and next ones are on Mega. If you get a blocked certificate for mega.nz like I did, it might be your country's ISP refusing to serve it. Changing your DNS (to Google's 8.8.8.8, for example) solves the issue.
After playing with it, I'm quite impressed by the result on small unscaled pixel-art.
I had a bunch of small, simple 4-frame animations which turned into 32 frames (x8 interpolation). I thought they would be too rough for it to detect the shapes and movements, but it's doing quite well. The result is a mush of pixels that cannot be called pixel-art, but it's very fluid and fits the original artistic direction. Look at those tentacles!
[Images: original → interpolated result]
Of course with so few frames, extreme movements are bound to be misinterpreted:
Here the relief effect on the spine translated well in the top view, but in the side view one spine jitters because the algorithm could not find which one to align itself to in the repeated pattern. Also, the big movement on the claw is totally an illusion; it just blends between frames.
Here it could not pick up the finer detail of the side eye looking around in the bottom view, so it jumps around weirdly.
The big movement of the claw is barely detected and the frames just blend into each other in the front view.
Applied to an 8x zoom of the same sprite, the front view still has a blending effect. The leg in the side view teleports because the nails on the foot travelled too far in one frame, so it locked onto the wrong one.
It can do rotations (here, x4 interpolation) if the starting animation is already very fluid. Otherwise, no.
When you output as .gif, aim for 50fps (20ms/frame) at most, as GIF cannot go faster. At 8x interpolation that means slowing fast animations down to at least 160ms/frame (8 × 20ms) before processing.
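For reference, a minimal sketch of reassembling exported PNG frames into a GIF at that cap, assuming Pillow; the folder and file names are hypothetical:

```python
# Minimal sketch: rebuild DainApp's PNG frame output as a GIF at the
# 50fps cap. Assumes Pillow; "dain_output" is a hypothetical folder.
from pathlib import Path
from PIL import Image

frames = [Image.open(f) for f in sorted(Path("dain_output").glob("*.png"))]
frames[0].save(
    "out.gif",
    save_all=True,
    append_images=frames[1:],
    duration=20,  # ms per frame; the practical GIF minimum (50fps)
    loop=0,       # loop forever
)
```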
Color indexing is also a problem, especially on untextured flat colors. "Limit color palette to only use original colors" seems to work only in very specific cases. Maybe it depends on how the .gif is saved? The current result is unusable as a sprite (it bleeds into the white background). Edit: NEVERMIND! Apparently it's just a bug in the gif export, based on a comment on the Patreon. Exporting PNG frames solves it.
Edit2: it was partially fixed in more recent versions. Still dithers but not as bad.
You can see color artefacts on the flat areas, especially the pink rabbit's ears.
A 12-image walk cycle turned into 96; the third image is the correct color indexation (2 colors), and you can see the noise introduced on the border.
A 10-frame goop loop turned into 80; you can see the color degradation.
Ultimately, like other posters mentioned, it works on simple animations. In my case, very rigid walk cycles, where the small size hides the imperfections. If a shape moves too much between frames, you can see it fading in/out or getting blended (the claws on the bear in the front animation) because the algorithm cannot guess that it's the same object that travelled too far.
The last bad thing is that it keeps crashing my computer after a few files, even if I'm careful with the size limit. Maybe there's a memory problem.
Still, turning a stiff 4-frame animation (which is often actually 2 unique frames repeated and mirrored) into a fluid 32-frame one is a feat in my book.
And the indexed PNG export could be touched up to create usable sprites from it.
Update: Here's the potential for an artist:
My current findings for small, cartoonish, low framerate pixel-art.
Middle: Original low-framerate animation.
Method A (Left):
Run DainApp directly on the unscaled pixel-art.
Dirty outlines. Good for when you don't mind having details blurred during movement. Smooth, continuous sub-pixel animation.
Method B (Right):
Resample 4x, pass to DainApp, then downsample /4 with nearest neighbour (see the sketch after this comparison).
Clean outlines. Good for keeping the sharp shapes of the original pixel-art. Less smooth, staggered movement.
[Images: method A ← original → B intermediate → method B]
Look at how it animates all the transitions between the faces!
Interpolating (X4) over 4-frame low-speed animations: look how much more is generated!
[Images: method A (left) vs method B (right), two examples]
Method B keeps outlines and details intact, so it's closer to "proper pixel-art".
Method A looks better aesthetically at 1x scale, as it applies antialiasing to the movement.
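Here's a minimal sketch of Method B's resample steps, assuming Pillow and PNG frames; the folder names are hypothetical, and DainApp itself runs between the two calls:

```python
# Minimal sketch of Method B's pre/post-processing around DainApp.
# Assumes Pillow; folder names are hypothetical.
from pathlib import Path
from PIL import Image

def upscale_frames(src_dir, dst_dir, factor=4):
    """Nearest-neighbour upscale so DainApp gets crisp, blocky pixels."""
    dst = Path(dst_dir)
    dst.mkdir(exist_ok=True)
    for f in sorted(Path(src_dir).glob("*.png")):
        img = Image.open(f)
        img.resize((img.width * factor, img.height * factor),
                   Image.NEAREST).save(dst / f.name)

def downscale_frames(src_dir, dst_dir, factor=4):
    """Nearest-neighbour downscale back to 1x, snapping DainApp's
    blended output back onto the pixel grid."""
    dst = Path(dst_dir)
    dst.mkdir(exist_ok=True)
    for f in sorted(Path(src_dir).glob("*.png")):
        img = Image.open(f)
        img.resize((img.width // factor, img.height // factor),
                   Image.NEAREST).save(dst / f.name)

# upscale_frames("original", "upscaled")    # feed "upscaled" to DainApp
# downscale_frames("dain_output", "final")  # then snap back to 1x
```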
Bump. If you want to see the potential for an artist:
1. Initial 4-frame animation with base colors.
2. Generated 32-frame animation (PNG output) with indexed colors.
3. Outlines cleaned up manually in ~20 minutes.
The process could be optimised further. Right now the outline gets antialiased with colors from the image, ending in a messy result. (Here the grey from the teeth and the green from the skin were used outside.) Maybe by providing a separate gradient palette exclusively for the outlines, it would be easier to clean up automatically (replace color).
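As a rough sketch of that "replace color" idea (my own interpretation, not a DainApp feature): snap every output pixel to the nearest color in the original sprite's palette. Assumes numpy and Pillow; the file names are hypothetical.

```python
# Rough sketch of automated "replace color" cleanup: snap each pixel of
# a DainApp frame to the nearest color in the original sprite's palette.
# Assumes numpy + Pillow, and an original with <= 256 unique colors.
import numpy as np
from PIL import Image

colors = Image.open("original.png").convert("RGB").getcolors(maxcolors=256)
palette = np.array([c for _, c in colors], dtype=int)  # unique RGB colors

frame = np.asarray(Image.open("dain_frame.png").convert("RGB"), dtype=int)
h, w, _ = frame.shape
flat = frame.reshape(-1, 3)

# Squared distance from every pixel to every palette color; pick nearest.
dists = ((flat[:, None, :] - palette[None, :, :]) ** 2).sum(axis=2)
snapped = palette[dists.argmin(axis=1)].reshape(h, w, 3)

Image.fromarray(snapped.astype(np.uint8)).save("dain_frame_clean.png")
```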
If the program handled transparency/alpha channel, it would also be nice.
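In the meantime, a hypothetical workaround (untested with DainApp itself): flatten each frame onto a key color before interpolation, then key it back out afterwards. Edge pixels that blend with the key color won't re-key cleanly, so it's only a starting point.

```python
# Hypothetical workaround for the missing alpha support: composite each
# RGBA frame onto a key color before DainApp, then key it back out.
# Assumes Pillow and that the sprite never uses the key color itself.
# Caveat: interpolated edges blend with the key color and won't re-key
# perfectly, so manual cleanup is still needed.
from pathlib import Path
from PIL import Image

KEY = (255, 0, 255)  # magenta

def flatten(src_dir, dst_dir):
    """Composite RGBA frames onto the key color for DainApp."""
    dst = Path(dst_dir)
    dst.mkdir(exist_ok=True)
    for f in sorted(Path(src_dir).glob("*.png")):
        img = Image.open(f).convert("RGBA")
        bg = Image.new("RGBA", img.size, KEY + (255,))
        Image.alpha_composite(bg, img).convert("RGB").save(dst / f.name)

def rekey(src_dir, dst_dir):
    """Turn key-colored pixels transparent again in DainApp's output."""
    dst = Path(dst_dir)
    dst.mkdir(exist_ok=True)
    for f in sorted(Path(src_dir).glob("*.png")):
        img = Image.open(f).convert("RGBA")
        img.putdata([(0, 0, 0, 0) if p[:3] == KEY else p
                     for p in img.getdata()])
        img.save(dst / f.name)
```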
Edit:
Left: Original 4 frames. Middle: DainApp output. Right: cleanup.
The original animation for the tentacle has 6 frames; the resulting animation has 48.
Edit:
Upscale x8 → DainApp → Downscale /8 might also be a good way to do it.
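The Method B sketch from earlier would cover this; just bump the factor:

```python
# Same hypothetical helpers as the Method B sketch, with factor=8.
upscale_frames("original", "upscaled", factor=8)    # before DainApp
downscale_frames("dain_output", "final", factor=8)  # after DainApp
```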
I'm far from knowledgeable when it comes to animation, but for the time being it looks like something that's handy in a pinch, rather than something you can rely on to make stuff work.
In a couple of years' time it could be different though; by 2025 there'll likely be big steps on this front, and it'll be interesting to see where it goes :)
i definitely see the potential in the tech but a lot of the examples in the video kinda miss the mark
however, I don't think most artists would use this to completely replace their animations, but as a tool to help smooth things out and hash out the tweens a bit
that said, if your goal is uncanny-smooth animation? seems like an actually good tool. Stuff like Chris's moonwalk works even better when it's eerily smooth
Nitpicker_Red's example is about what i'd expect in how it'd be used
generating frames, then the artist cleans them up and sees what they wanna keep
Imagine if we could teach computers to draw the in-between frames for features and TV. It should be an improvement over using 3D models and trying to make them look 2D.
What binaries are people using (don't know how to compile)? I'm using Dain App but wondering if there is a better one. I'm getting out of memory errors with a GTX 1060. Is my card just too lame?
I wonder how fucking uncanny valley Alucard's animation would look with this. He already has an absurdly smooth running animation, which kind of helps sell the fact that he's not entirely human - it makes it feel like he's gliding along the ground using his vampiric powers as much as he is running.