I'm an artist and programmer. I make websites, music, paintings, games, apps, and of course, animations.

Dave Pagurek @Pahgawk

23, Male

Ottawa, Canada

Joined on 2/8/09


Comments (7)

What about VR animation? Like Mindshow VR. There is huge potential there, perhaps as a replacement for mocap, or is it too basic to do that?

Another idea, what if there was a marketplace of sorts, sort of like Splice, where you pay a monthly fee, and in return you can drag and drop animation sequences straight into your project?

That would be interesting for sure and speed up workflow!

There's definitely a lot that can be done in VR! At SIGGRAPH last summer I saw a music video for Thriller that was drawn and animated entirely in VR. It's a new domain with lots of room for experimentation. Actually, one project out of the lab I'll be joining in the fall involves doing 3d modelling in VR by painting a shell of a shape with "ribbons", which are then converted into a proper watertight mesh suitable for stuff like 3d printing.

The submerged cathedral render is actually really impressive, considering the route you took to achieve it :)

Thanks!

Procedural modelling hmm... sounds a bit like the next level of motion tweens huh? :) Don't animate nearly enough to contribute a good answer here, and it's mostly all FBF when I do, but this was an interesting read. Cool Cathedral too. Something so atmospheric about pretty much any kind of architecture submerged like that.

Good Luck on that offer! Sounds like good things all over.

Haha essentially! Computer-assisted stuff is always sort of hard because so much of the time, it stands out for having such a different feel from the handmade parts. I think I aspire to integrate things as well as Studio Ghibli does -- the black spaghetti-like texture of the monster in Princess Mononoke is an example of procedural generation and a particle system integrated really well. Maybe the thing computers excel at is repeating a handmade pattern at a large scale, so you can add way more detail than you could ever manage by hand, or than you'd get from plain copy-and-paste.
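(To make that "repeat a handmade pattern at scale" idea concrete, here's a minimal sketch -- every name and number in it is made up for illustration. It just scatters grid placements of a hand-drawn motif with random jitter in position and scale, so the repetition doesn't read as mechanical copy-paste.)

```python
import random

def scatter_pattern(motif_size, canvas_size, count, jitter=0.3, seed=1):
    """Place `count` copies of a hand-drawn motif on a grid, with random
    jitter in position and scale so the repetition reads as organic
    rather than copy-pasted. Returns (x, y, scale) placements."""
    random.seed(seed)
    placements = []
    cols = max(1, canvas_size // motif_size)
    for i in range(count):
        # base grid position
        x = (i % cols) * motif_size
        y = (i // cols) * motif_size
        # jitter each copy so the grid is less mechanical
        x += random.uniform(-jitter, jitter) * motif_size
        y += random.uniform(-jitter, jitter) * motif_size
        scale = 1.0 + random.uniform(-jitter, jitter)
        placements.append((x, y, scale))
    return placements

placements = scatter_pattern(motif_size=32, canvas_size=256, count=50)
```

You'd then stamp the motif at each placement; the point is just that the computer handles the scale while every individual mark stays handmade.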

have you ever done an animation on power point
that's the future

Ah, interesting, I thought Studio Ghibli in particular really stuck with the traditional methods, but it's been a while since I saw some of their work. Might not have been as perceptive when I did...

In a lot of anime you see that kind of stuff NOT working well with the handmade bits at all. :) Especially around the millennium shift. It's improved to the point I don't mind it as much now, even though you usually do make the distinction. The latest episodes of One Piece, for example, seem to work with models for ships and basically any kind of vehicle in motion, whereas characters and other backgrounds all follow the traditional style. Fluid substances like water sometimes seem computer generated, and fighting sequences are becoming a bit of a mixture. It's pretty cool to follow.

Must admit I'm a pretty die-hard fan of the everything-manual approach, though. :) But definitely useful technological advances. Would be pretty cool to see some AI in regard to animation too, one that'd be able to emulate the traditional methods in a more automated way. Both a bit scared (that it'd be better than we can ever make ourselves) and intrigued to see what that could accomplish.

dave! been too long.

i’ve been getting into programming, ai, and vr over the past few years with a focus in automation. my vote for most important automation in traditional 2d - and probably the most interesting from a programming standpoint - has to be cleanup.

seems like the solution would be an interesting mix of pose estimation, segmentation, and sketch simplification - all of which have quite a few fantastic papers to reference as a starting point.

for coloring - i imagine a generative cnn could learn to loosely apply the lighting of a 3d reference image to a 2d scene, essentially a style transfer. once you’ve applied lighting to a 2d scene, you could then do a second style transfer on top of that to create a textured/hand drawn/painted look. could be a fun thing to experiment with.
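(the cnn itself is out of scope for a comment, but the first stage can be faked very crudely without one: just shift the flat 2d frame's luminance statistics toward the lit 3d reference's, reinhard-style mean/std matching. this toy is my own stand-in, not the generative model described above, and all the arrays are made up.)

```python
import numpy as np

def match_lighting(flat_2d, lit_3d_ref):
    """Crude stand-in for 'loosely apply the lighting of a 3D reference
    to a 2D frame': normalize the flat frame's luminance to zero mean /
    unit std, then rescale to the lit render's mean and std
    (Reinhard-style statistics matching)."""
    src = flat_2d.astype(float)
    ref = lit_3d_ref.astype(float)
    out = (src - src.mean()) / (src.std() + 1e-8)
    out = out * ref.std() + ref.mean()
    return np.clip(out, 0, 255)

# evenly lit flat 2D frame (gentle left-to-right gradient)
flat = np.tile(np.linspace(100.0, 150.0, 4), (4, 1))
# hypothetical lit 3D render: high-contrast light and shadow
ref = np.array([[40.0, 220.0, 40.0, 220.0]] * 4)
relit = match_lighting(flat, ref)
```

a learned model would do this spatially instead of globally, of course -- this only moves the overall contrast, which is why you'd still want the second, texture style-transfer pass on top.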

anyways, stay in touch - congrats on grad school!

Right, agreed! One of the projects that came out of the lab I'll be going to starts tackling the sketch cleanup problem actually: https://www.cs.ubc.ca/labs/imager/tr/2018/StrokeAggregator/ There's a lot of work on things like this that look primarily at static images, so I'd like to take a look at what else it can do when it's got a sequence. If it can pick out poses over time, that can be really useful for making sure your motion curves are smooth.

Where are you at these days? It's been forever!
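(A toy version of the "poses over time" point, with made-up numbers and no connection to the StrokeAggregator work linked above: if a per-frame pose estimator hands you a jittery track for one joint, even a plain moving average starts to tame it into a smooth motion curve.)

```python
def smooth_curve(values, window=3):
    """Moving-average smoothing of a per-frame track (e.g. one joint's x
    position, estimated independently on each frame), so estimator
    jitter doesn't read as wobble in the motion curve. The window is
    clamped at the ends of the sequence."""
    half = window // 2
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

noisy_x = [0.0, 1.2, 1.9, 3.3, 3.8, 5.1]  # hypothetical per-frame positions
smooth_x = smooth_curve(noisy_x)
```

Real pipelines would fit a spline or filter in velocity space instead, but the idea is the same: the sequence gives you information no single frame has.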

@Pahgawk i almost want to think that image sequences should make for an easier solution. a sequence would help to both separate noise from art and should help flesh out a truer representation of what the artist is going for.

the problem i run into in a lot of my single-image based solutions is that artists are rarely consistent in the amount of detail in a given frame of animation - some frames will have a fully constructed character and some are pretty much just loose scribbles. being able to input several frames together - or better yet pass in some type of character model reference - would hopefully allow for a better cleanup.
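(a crude illustration of that multi-frame intuition, assuming the rough frames have already been registered to each other -- which real animation frames haven't, so treat this as the step after alignment, with toy binary data: a per-pixel median vote keeps marks that persist across frames and drops one-off scribbles.)

```python
import statistics

def consensus_frame(frames):
    """Per-pixel median across a short stack of roughly aligned rough
    frames: strokes that persist in most frames survive the vote,
    one-off noise and stray scribbles get rejected."""
    h, w = len(frames[0]), len(frames[0][0])
    return [[statistics.median(f[y][x] for f in frames)
             for x in range(w)]
            for y in range(h)]

# 1 = ink, 0 = blank; the stray mark in frame b appears only once
a = [[1, 0], [1, 0]]
b = [[1, 1], [1, 0]]
c = [[1, 0], [1, 0]]
clean = consensus_frame([a, b, c])
```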

i'm over at titmouse in LA! doing a mix of everything from vr to traditional animation.