
lazygecko

Member
Oct 25, 2017
3,628


I just stumbled upon this video in my feed and had to share it. Ever since getting acquainted with machine learning I've always wondered if this could be used to generate smoother animation frames, so it's really cool to finally see a tangible attempt. The quality varies a lot in the video but some examples really look incredible out of the box like the MK3 Sektor stun animation.

I think even if you don't get ready-to-use output straight away (which doesn't feel feasible in the first place, given how it likely garbles pixels and such), this tech could still be a great boon for artists and animators, cutting down on what would otherwise be a lot of mundane grunt work filling in the spaces in between the important keyframes, with the generated output used as a base to work from.

The project has a Patreon which gives you access to builds:
 

TeenageFBI

One Winged Slayer
Member
Oct 25, 2017
10,239
Some of those animations translate far better than others but it sure looks like a good way to help animators hash out some tweens.
 
Oct 28, 2017
1,219
Another demonstration:



Research paper is also here:

It does seem to be primarily geared towards live action and CG. The problem with using this on 2D is that, between keyframes, limbs and other parts will sometimes morph in and out of existence; something like CACANI would currently be more useful from an animator's perspective. I'm curious to see the results of a model trained specifically for limited 2D anime or the like.
 

Holundrian

Member
Oct 25, 2017
9,153
Kind of looks good for a lot of the gooey-style animations. For some of the others, just forcing things to be more fluid made the animation less expressive. Interesting tool, though. AI tooling is going to be insane in the next 10 years.
 

Thera

Banned
Feb 28, 2019
12,876
France
I don't like it for most of them. It's the same effect as True Motion or any crappy method like that you get on TVs.
It is impressive, though.
 

VariantX

Member
Oct 25, 2017
16,886
Columbia, SC
I'm not sure it can or should be used for everything, but I think it does look better than I expected it to and it probably has a lot of practical applications.
 

Ferrio

Member
Oct 25, 2017
18,065
To me that's a disservice to pixel art. I'll take original artist sprites over that any day.
 

nded

Member
Nov 14, 2017
10,573
Yeah, from a production standpoint it seems like it could be helpful for generating in-between frames.
 

VariantX

Member
Oct 25, 2017
16,886
Columbia, SC
Mortal Kombat actor captures looked more life-like.

Yeah, those actually looked pretty good. I also really liked how the water animation looked on the background tiles, and some of the sfx. It kind of makes 2D characters look a little too smooth, though: it's filling in the gaps between frames but not adding detail, so the animations lose a bit of life to me.
 

Platy

Member
Oct 25, 2017
27,691
Brazil
It works much better on sprites that already have an unusually high frame count... which is another way of saying that most of the examples in the OP are trash, since they don't do things like knowing when to decelerate or accelerate.
 

Derachi

Member
Oct 27, 2017
7,699
Some of those animations translate far better than others but it sure looks like a good way to help animators hash out some tweens.
This was basically my thought exactly. It's cool if people want to use this to help crank out extra animation frames for basically nothing, but I would never want to see it blanket-used on a sprite-based game.
 
Oct 25, 2017
9,103
A lot of these originals are already so nicely animated that it seems kind of pointless to interpolate further. The tech looks cool though and surely will have useful applications.

I'll echo that I don't like the ones that end up looking more like flash tweening.
 

MoogleMaestro

Member
Oct 25, 2017
1,110
The only problem with this so far is that, unlike real in-betweeners, there's no way for the algorithm to know the intention of the original animation. This means that some timing can be made worse, reducing 'impact' of the animation.

Still, neat. I saw this a few months ago looking into various neural network projects regarding animation.
 

MP!

Member
Oct 30, 2017
5,198
Las Vegas
It loses the original charm... but I feel this has applications somewhere... the effects animation is really nice, for example.
 

Mzril

Attempted to circumvent ban with alt account
Banned
Oct 26, 2017
435
That one ghost one looked off. But in a really creepy but good way.

Like a combination of smooth and rigid that was unsettling.
 

jotun?

Member
Oct 28, 2017
4,497
The way that some of them work really well while others don't suggests that it should be possible to design your animations so that this tool produces good results.

For someone who wants to make a game with sprites at 60+ fps, this could be a great way to do that with much less work
 
Jan 21, 2019
2,902
Another demonstration:



Research paper is also here:

It does seem to be primarily geared towards live action and CG. The problem with using this on 2D is that, between keyframes, limbs and other parts will sometimes morph in and out of existence; something like CACANI would currently be more useful from an animator's perspective. I'm curious to see the results of a model trained specifically for limited 2D anime or the like.

Oh God, I hope the next generation of TVs starts to implement this. I hate the 24fps movie standard so much. Thankfully my Sony has pretty good interpolation, so I can watch movies without annoying low-framerate stuttering. This was made for me.
 

toy_brain

Member
Nov 1, 2017
2,207
Nice!
A couple of them looked a bit odd on close inspection, but most looked amazing.
Loved that one where a hand opens up to reveal a floating ball. Just so smooth!
 

EvilBoris

Prophet of Truth - HDTVtest
Verified
Oct 29, 2017
16,684
That's pretty cool; it's fun how it highlights a little more which areas aren't animated.

I love a silky smooth pixel art animation, so it's fascinating to see it across so many pieces here, especially as so much pixel art animation apes either cost-limited anime or memory-limited retro console art.
 

Efejota

Member
Mar 13, 2018
3,750
The only problem with this so far is that, unlike real in-betweeners, there's no way for the algorithm to know the intention of the original animation. This means that some timing can be made worse, reducing 'impact' of the animation.

Still, neat. I saw this a few months ago looking into various neural network projects regarding animation.
On the other hand, using this as a base could be good in my opinion. The artists could always tweak those extra frames after the AI creates them.
 

Nitpicker_Red

Member
Nov 3, 2017
1,282
It looks like it's interpolating pixel-art, but not as pixel-art: rather as animatable pieces, so there is sub-pixel movement.
It's taking advantage of the readability and texture of pixel-art, but it could very well be any kind of 2D animation.

It has its place where pixel-art is not shown at 1:1 pixel scale, or where pixel-art is used out of technical and budgetary limitations.
I think even if you don't get ready-to-use output straight away (which doesn't feel feasible in the first place, given how it likely garbles pixels and such), this tech could still be a great boon for artists and animators, cutting down on what would otherwise be a lot of mundane grunt work filling in the spaces in between the important keyframes, with the generated output used as a base to work from.
Oh, I didn't think about how it could help by being a base for the artist to work on later. Interesting idea!
I might take a look at it next time I animate something in pixel-art.
RA1kRoS.gif
9VGIgES.gif


Edit: The last public version at this point is 2.2 and is on Google Drive, but the previous and next ones are on Mega. If you get a blocked certificate for mega.nz like I did, it might be your country's ISP refusing to serve it. You can simply change your DNS (for example to Google's) to solve the issue.
 
Last edited:

Hampig

Member
Oct 25, 2017
1,703
Not a fan of it. For some of the things it looks nice, but I'd prefer to just see the original in most cases.
 

Nitpicker_Red

Member
Nov 3, 2017
1,282
After playing with it, I'm quite impressed by the results on small, unscaled pixel-art.
I had a bunch of small and simple 4-frame animations which turned into 32 frames (8x frame interpolation). I thought they would be too rough for it to detect the shapes and movements, but it does quite well. The result is a mush of pixels that cannot be called pixel-art, but it's very fluid and fits the original artistic direction. Look at those tentacles!
N0LBNwP.gif
gS5MrZB.gif

jaUo9b4.gif
79JCzQo.gif
SCYIjma.gif
bEKd3oH.gif
OfbyZrj.gif
iETB1a1.gif
XyTCtzG.gif
kUeuq9Y.gif
gJOBsr0.gif
EWARxVi.gif
Z4IgOLd.gif
nNKOpZs.gif
xjERKCw.gif
tlQym9R.gif
7xsS8es.gif
tC3jXNp.gif
QryXWnx.gif
vDoIiaK.gif
T9kPFXX.gif
jqnbkXF.gif
Of course, with so few frames, extreme movements are bound to be misinterpreted:
Here the relief effect on the spine translated well in the top view, but in the side view one spine jitters because the algorithm could not find which spine in the repeated pattern to align to. Also, the big movement on the claw is totally an illusion; it just blends between frames.
lblwaAe.gif
Mw9drbj.gif

Here it could not guess the finer detail of the side eye looking around in the bottom view, so it jumps around weirdly.
8e2qYB6.gif
u9slt9g.gif

The big movement of the claw is hardly detected, and the frames just blend into each other in the front view.
S65xu6L.gif
1SGH335.gif

Applied on an 8x zoom of the same sprite, the front view still has a blending effect. The leg in the side view has a teleporting effect: the nails on the foot travelled too far in one frame, so it locked onto the wrong one.


It can do rotations (here, x4 interpolation) if the starting animation is already very fluid. Otherwise, no.
mf6HeIs.gif
150Bu8b.gif
Nox8R0r.gif
fqknAwv.gif
rJIjGgR.gif
0D9MuHu.gif

When you output as .gif, aim for 50fps (20ms/frame) at most, as GIF cannot go faster. At 8x interpolation, that means slowing fast animations down to at least 160ms/frame.
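The timing constraint above can be sketched as a tiny helper (a hypothetical function, not part of DainApp): GIF delays are stored in centiseconds, so roughly 20ms/frame (50fps) is the practical floor, and the interpolation factor multiplies the minimum source frame duration.

```python
# Hypothetical helper sketching the timing arithmetic above. GIF frame delays
# are stored in centiseconds, so ~20 ms/frame (50 fps) is the practical floor.
def min_source_delay_ms(interp_factor: int, gif_floor_ms: float = 20.0) -> float:
    """Slowest delay the source animation needs so the interpolated GIF
    never asks for frames shorter than the GIF floor."""
    return gif_floor_ms * interp_factor

print(min_source_delay_ms(8))  # 160.0 ms/frame, matching the 8x example above
```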

Color indexing is also a problem, especially on untextured flat colors. "Limit color palette to only use original colors" seems to work only in very specific cases. Maybe it depends on how the .gif is saved? The current result is unusable as a sprite (it bleeds into the white background).
Edit: NEVERMIND! Apparently it's just a bug in the gif export, based on a comment on the Patreon. Exporting PNG frames solves it.
Edit 2: it was partially fixed in more recent versions. It still dithers, but not as badly.
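One way to clean up the remaining dithering (a rough sketch, not DainApp's actual code) is to snap each exported pixel back to the sprite's original palette by nearest squared RGB distance; the palette below is made up for illustration.

```python
# Hedged sketch: snap an interpolated color back to the sprite's original
# palette by nearest squared RGB distance, one way to undo the dithering.
def snap_to_palette(color, palette):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(palette, key=lambda p: dist2(color, p))

palette = [(0, 0, 0), (255, 255, 255), (255, 128, 160)]  # illustrative 3-color palette
print(snap_to_palette((250, 120, 150), palette))  # → (255, 128, 160)
```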

You can see color artefacts in the flat areas, especially the pink rabbit's ears.
4Jik1oo.gif
NjJNAnY.gif
ADUo9mP.gif
3KwUIT0.gif
65dpJAH.gif
pdJ1Cix.gif

12-image walk cycle into 96; the third image is correct color indexation (2 colors), and you can see the noise introduced on the border.
5NpK63Y.gif
QZ3Dczh.gif
YP3HzRf.gif

10-frame goop loop into 80; you can see the color degradation.
aW6IFJG.gif
Wq1qtXo.gif

Ultimately, like other posters mentioned, it works on simple animation; in my case, very rigid walk cycles, where the small size hides the imperfections. If a shape moves too much between frames, you can see it fading in/out or getting blended (the claws on the bear in the front animation) because the algorithm cannot guess that it's the same object having travelled too far.

The last bad thing is that it keeps crashing my computer after a few files, even when I'm careful with the size limit. Maybe there's a memory problem.

Still, turning a stiff 4-frame animation (which is often actually 2 unique frames repeated and mirrored) into a fluid 32-frame one is a feat in my book.
And the indexed PNG export could be touched up to create usable sprites from it.
 
Last edited:

Nitpicker_Red

Member
Nov 3, 2017
1,282
Update: Here's the potential for an artist.
My current findings for small, cartoonish, low-framerate pixel-art:

Middle: Original low-framerate animation.

Method A (Left):
Directly use DainApp on the unscaled Pixel-art
Dirty outlines. Good for when you don't mind having details blurred during movement. Smooth, continuous sub-pixel animation.

Method B (Right):
Resample 4X, pass to DainApp, then Downsample /4 with Nearest Neighbour
Clean outlines. Good for keeping sharp shapes of the original pixel-art. Less smooth, staggered movement.
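Method B's pre/post scaling steps can be sketched in pure Python (a real pipeline would use an image editor or library, with DainApp running on the upscaled frames between the two calls); frames here are just lists of pixel rows.

```python
# Rough sketch of Method B's scale steps. Upscale with nearest-neighbour,
# run the interpolator on the big frames, then downscale back the same way.
def upscale_nearest(frame, scale):
    out = []
    for row in frame:
        wide = [px for px in row for _ in range(scale)]  # repeat each pixel
        out.extend([list(wide) for _ in range(scale)])   # repeat each row
    return out

def downscale_nearest(frame, scale):
    # keep the top-left pixel of each scale x scale block
    return [row[::scale] for row in frame[::scale]]

art = [[1, 2], [3, 4]]
big = upscale_nearest(art, 4)              # 2x2 -> 8x8
assert downscale_nearest(big, 4) == art    # the round trip is lossless
```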

[method A]←[original]→[B intermediate]→[method B]
ZF5yoQT.gif
A←
HLG1gFi.gif
→B'
RsnAvA5.gif
→B
YLEZryq.gif


Look at how it animates all the transitions between the faces!
BfW3hd0.gif


Interpolating (X4) over 4-frame low-speed animations: look how much more is generated!
f2X20bF.gif
A←
YT16aku.gif
→B
XY14eCd.gif
Q7BF33e.gif
A←
xjTIyKV.gif
→B
PJhzpmf.gif


Method B keeps outlines and details intact, so it's closer to "proper pixel-art".
Method A looks better aesthetically at 1x scale, as it applies antialiasing to the movement.

Edit 2: I wrote a tutorial about DAIN-App and pixel-art here: https://justpaste.it/5dvd7


Bump. If you want to see the potential for an artist:
1. Initial 4-frame animation with base colors.
2. Generated 32-frame animation (PNG output) with indexed colors.
3. Outlines cleaned up manually in ~20 minutes.
1.
pS97VSx.gif
2.
GgHdJBH.gif
3.
GPeazOB.gif


The process could be further optimised. Right now the outline gets antialiased with colors from the rest of the image, ending in a messy result. (Here the grey from the teeth and the green from the skin were used outside.) Maybe by providing a separate gradient palette exclusively for the outlines, it would be easier to clean up automatically (replace color).
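The automatic cleanup suggested above could look something like this sketch: if the outlines used their own reserved palette, any stray outline color could be replaced mechanically. The color names are illustrative only, not from the actual sprites.

```python
# Hypothetical outline cleanup pass: replace any color from a known set of
# "dirty" outline colors with the clean outline color, cell by cell.
def replace_colors(frame, stray_colors, clean_color):
    return [[clean_color if px in stray_colors else px for px in row]
            for row in frame]

frame = [["green", "grey"], ["black", "skin"]]
print(replace_colors(frame, {"green", "grey"}, "black"))
# → [['black', 'black'], ['black', 'skin']]
```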

If the program handled transparency/alpha channels, that would also be nice.

Edit:
Left: Original 4 frames. Middle: DainApp output. Right: cleanup.
dtSI90Y.gif

The original animation for the tentacle has 6 frames. The resulting animation has 48.
5HKPa2N.gif

Edit:
UpScale x8 → DainApp → DownScale /8 might also be a good way to do it.
 
Last edited:

B.K.

Member
Oct 31, 2017
17,031
Another demonstration:



Research paper is also here:

It does seem to be primarily geared towards live action and CG. The problem with using this on 2D is that, between keyframes, limbs and other parts will sometimes morph in and out of existence; something like CACANI would currently be more useful from an animator's perspective. I'm curious to see the results of a model trained specifically for limited 2D anime or the like.


That still looks like total shit to me.
 

Orioto

Member
Oct 26, 2017
4,716
Paris
After playing with it, I'm quite impressed by the results on small, unscaled pixel-art.
I had a bunch of small and simple 4-frame animations which turned into 32 frames (8x frame interpolation). I thought they would be too rough for it to detect the shapes and movements, but it does quite well. The result is a mush of pixels that cannot be called pixel-art, but it's very fluid and fits the original artistic direction. Look at those tentacles!
N0LBNwP.gif
gS5MrZB.gif
jaUo9b4.gif
79JCzQo.gif
SCYIjma.gif
bEKd3oH.gif
OfbyZrj.gif
iETB1a1.gif
XyTCtzG.gif
kUeuq9Y.gif
gJOBsr0.gif
EWARxVi.gif
Z4IgOLd.gif
nNKOpZs.gif
xjERKCw.gif
tlQym9R.gif
7xsS8es.gif
tC3jXNp.gif
QryXWnx.gif
vDoIiaK.gif
T9kPFXX.gif
jqnbkXF.gif
Of course, with so few frames, extreme movements are bound to be misinterpreted:
Here the relief effect on the spine translated well in the top view, but in the side view one spine jitters because the algorithm could not find which spine in the repeated pattern to align to. Also, the big movement on the claw is totally an illusion; it just blends between frames.
lblwaAe.gif
Mw9drbj.gif

Here it could not guess the finer detail of the side eye looking around in the bottom view, so it jumps around weirdly.
8e2qYB6.gif
u9slt9g.gif

The big movement of the claw is hardly detected, and the frames just blend into each other in the front view.
S65xu6L.gif
1SGH335.gif

Applied on an 8x zoom of the same sprite, the front view still has a blending effect. The leg in the side view has a teleporting effect: the nails on the foot travelled too far in one frame, so it locked onto the wrong one.


It can do rotations (here, x4 interpolation) if the starting animation is already very fluid. Otherwise, no.
mf6HeIs.gif
150Bu8b.gif
Nox8R0r.gif
fqknAwv.gif
rJIjGgR.gif
0D9MuHu.gif

When you output as .gif, aim for 50fps (20ms/frame) at most, as GIF cannot go faster. At 8x interpolation, that means slowing fast animations down to at least 160ms/frame.

Color indexing is also a problem, especially on untextured flat colors. "Limit color palette to only use original colors" seems to work only in very specific cases. Maybe it depends on how the .gif is saved? The current result is unusable as a sprite (it bleeds into the white background).
Edit: apparently it's just a bug in the gif export, based on a comment on the Patreon. Exporting PNG frames solves it.
You can see color artefacts in the flat areas, especially the pink rabbit's ears.
4Jik1oo.gif
NjJNAnY.gif
ADUo9mP.gif
3KwUIT0.gif
65dpJAH.gif
pdJ1Cix.gif

12-image walk cycle into 96; the third image is correct color indexation (2 colors), and you can see the noise introduced on the border.
5NpK63Y.gif
QZ3Dczh.gif
YP3HzRf.gif

10-frame goop loop into 80; you can see the color degradation.
aW6IFJG.gif
Wq1qtXo.gif

Ultimately, like other posters mentioned, it works on simple animation; in my case, very rigid walk cycles, where the small size hides the imperfections. If a shape moves too much between frames, you can see it fading in/out or getting blended (the claws on the bear in the front animation) because the algorithm cannot guess that it's the same object having travelled too far.

The last bad thing is that it keeps crashing my computer after a few files, even when I'm careful with the size limit. Maybe there's a memory problem.

Still, turning a stiff 4-frame animation (which is often actually 2 unique frames repeated and mirrored) into a fluid 32-frame one is a feat in my book.
And the indexed PNG export could be touched up to create usable sprites from it.


I wasn't convinced, but the way you use it is super impressive.
 

Temascos

Member
Oct 27, 2017
12,519
I'm far from knowledgeable when it comes to animation, but for the time being it looks like something that's handy just in case, rather than something you can rely on to make stuff work.

In a couple of years it could be different though; by 2025 there'll likely be big steps on this front, and it'll be interesting to see where it goes :)
 

Garrod Ran

self-requested ban
Banned
Mar 23, 2018
16,203
I definitely see the potential in the tech, but a lot of the examples in the video kinda miss the mark.
However, I don't think most artists would use this to completely replace their animations, but rather as a tool to help smooth things out and hash out the tweens a bit.

That said, if your goal is uncanny smooth animation? Seems like an actually good tool. Stuff like Chris's moonwalk works even better when it's eerily smooth.

Nitpicker_Red's example is about what I'd expect in how it'd be used:
generating frames, then the artist cleans them up and sees what they wanna keep.
 
Oct 29, 2017
13,502
Imagine if we could teach computers to draw in-between frames for features and TV. It should be an improvement over using 3D models and trying to make them look 2D.
 

Bearwolf

Member
Oct 27, 2017
477
What binaries are people using (I don't know how to compile)? I'm using Dain App but wondering if there's a better one. I'm getting out-of-memory errors with a GTX 1060. Is my card just too lame?
 

VariantX

Member
Oct 25, 2017
16,886
Columbia, SC
I wonder how fucking uncanny valley Alucard's animation would look with this. He already has an absurdly smooth running animation, which kind of helps sell that he's not entirely human; it feels like he's gliding along the ground using his vampiric powers as much as he is running.