Pahgawk's News

How 3D Rendering Works

2017-06-08 21:24:01 by Pahgawk

Recently I went about making a 3D renderer (specifically, a raytracer.) Before researching, it seemed like a daunting task, shrouded in the promise of complicated math and a variety of subtle, deeply hidden bugs. As it turns out, there is a fair amount of linear algebra, and it is hard to tell if your glass rendering is weird or if glass itself is just weird. That said, the core concepts are more accessible than I thought they would be. Having used a 3D rendering system before as an artist, I found it particularly interesting to see how it worked from the inside, and I thought the joint artist/programmer community of Newgrounds might find it interesting too.

My code examples are in Swift because that's what I used to write my renderer, but the concepts are universal. (Also, is there a way to add monospace text to a Newgrounds news post?)

 

What is 3D rendering, anyway?

We can think of the camera as a 2D rectangle in 3D space. It's like the film of a camera: when a photon hits this rectangle, we see its colour value in the location that it hit the film. The image we get from a camera is a 2D projection of the 3D world onto the rectangle of that piece of film.

2731551_149697144163_projection.png

You can draw a line from a focal point behind the film to a point on an object. Where that line intersects the film is where the projection of that point in 2D will be.

The end goal of a 3D renderer is to make that 2D image of 3D space.

 

How does raytracing fit in?

There are plenty of ways you can write a 3D renderer. Some are better suited to fast render times for applications like gaming. Others, like raytracing, take longer to compute but can model light more realistically. Raytracing can take into account more complicated reflections of light off of other objects, soft shadows, lens blur, and more.

In real life, light is emitted from a source as a photon of a certain colour. It then will travel in a straight line (well, mostly) until it hits something. Then, one of a few things can happen to the photon. It can get reflected in a direction, its path can be bent from refraction, or it can be absorbed by the material. Some of the photons eventually bounce their way to a camera, where they are "recorded."

Raytracing models this behaviour, but in reverse. Rays are cast from the camera and bounce around the surfaces in a scene until they hit a light source. At that point, the ray's colour is recorded. It would also work to cast rays from light sources to the camera, like in real life, but this tends to be less efficient since so many photons just won't reach the camera.

2731551_149697144151_cast-bounce.png

Starting from the focal point and going through a coordinate (x, y) on the film, a ray is cast, which bounces until it hits a light source to find its colour.

So, for each pixel in the image you want to render, here's what we do:

  1. Cast a ray of white light from a pixel
  2. Find the first object the ray intersects with
  3. If it is a light source, multiply the ray's colour with the light source's colour to get the pixel colour
  4. Otherwise, reflect, refract or absorb the ray, and go back to step 2 with the resulting ray
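
Sketched in code (using the Ray, Color, firstIntersection, and Material pieces defined later in this post, plus a hypothetical Scene type that can tell you whether an intersection belongs to a light source and which material was hit), that loop might look something like this:

  func trace(initial: Ray, scene: Scene, maxBounces: Int = 50) -> Color {
    var ray = initial
    for _ in 0..<maxBounces {
      guard let intersection = firstIntersection(ray: ray, spheres: scene.spheres) else {
        // Nothing was hit: treat the background as the light source and stop
        return ray.color * scene.backgroundColor
      }
      if let light = scene.lightSource(at: intersection) {
        // Step 3: multiply the ray's colour by the light's colour
        return ray.color * light.color
      }
      // Step 4: bounce off the material and keep tracing
      ray = scene.material(at: intersection).bounce(ray: ray, intersection: intersection)
    }
    // Capping the bounces keeps a ray from ricocheting forever
    return Color(0x000000)
  }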

 

Modelling geometry

Here's where some math happens. How do we determine if a ray hits an object? First, let's model a ray. A ray in 3D space can be defined as a direction vector, and a point that it goes through. Both this 3D point and vector can be represented by an (x, y, z) coordinate, but we're actually going to use a 4-vector (x, y, z, w).

Why the extra coordinate? You can certainly use a normal 3-vector, but then it's up to you to keep track of which 3-vector in your program is a point and which is a direction. If you use a 4-vector, though, w = 0 implies that it is a direction vector and w = 1 implies that it is a point, which makes things work out pretty nicely. If you add a point and a vector, their w coordinates add to 1 + 0 = 1, meaning the result is still a point. A vector minus a vector is still a vector, and 0 - 0 = 0. A point minus a point is a vector, and 1 - 1 = 0. A point plus a point doesn't make sense, and fittingly it leaves you with a w value of 2, which flags that something went wrong. When we use transformation matrices later, they Just Work™ with this way of modelling points and vectors (for example, translating a point moves it, while translating a direction vector leaves it unchanged.) It's convenient.
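
For reference, here's a minimal sketch of a Vector4 that follows this convention. Only the handful of operations discussed above are included; a real one would also need dot products, lengths, scalar multiplication, and so on.

  struct Vector4 {
    var x, y, z, w: Float

    static func point(_ x: Float, _ y: Float, _ z: Float) -> Vector4 {
      return Vector4(x: x, y: y, z: z, w: 1) // w = 1 marks a point
    }

    static func direction(_ x: Float, _ y: Float, _ z: Float) -> Vector4 {
      return Vector4(x: x, y: y, z: z, w: 0) // w = 0 marks a direction
    }

    // point + vector = point (1 + 0 = 1), vector + vector = vector (0 + 0 = 0)
    static func + (a: Vector4, b: Vector4) -> Vector4 {
      return Vector4(x: a.x + b.x, y: a.y + b.y, z: a.z + b.z, w: a.w + b.w)
    }

    // point - point = vector (1 - 1 = 0), point - vector = point (1 - 0 = 1)
    static func - (a: Vector4, b: Vector4) -> Vector4 {
      return Vector4(x: a.x - b.x, y: a.y - b.y, z: a.z - b.z, w: a.w - b.w)
    }
  }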

So, we've got this definition of a ray:

  struct Ray {
    let point, direction: Vector4
    let color: Color
  }

Then, for any given point on the film and focal point behind the film, we can cast an initial ray:

  func castRay(from: Vector4, through: Vector4) -> Ray {
    return Ray(
      point: from,
      direction: through - from,
      color: Color(0xFFFFFF) // Start with white light
    )
  }

To see if it intersects with an object, we need an object to model. A sphere is a nice one, since we only need a few small pieces to represent it:

  struct Sphere {
    let center: Vector4
    let radius: Float
  }

We can then pretty easily make a function to check whether or not a ray intersects with a sphere by using the equations given in the Wikipedia article for line-sphere intersection. We can make it return an Intersection (the point of intersection and the normal from the surface at that point) if it exists, or nil otherwise. If we have multiple spheres, we want the first one the ray intersects with, so you can iterate through the spheres and take the intersection that's the shortest distance from the ray origin. Obviously this isn't the most efficient approach, but it works for small scenes:

  struct Intersection {
    let point, normal: Vector4
  }

  func firstIntersection(ray: Ray, spheres: [Sphere]) -> Intersection? {
    return spheres.flatMap{ (sphere: Sphere) -> Intersection? in
      return intersectionBetween(ray: ray, sphere: sphere)
    }.sorted{ (a: Intersection, b: Intersection) -> Bool in
      (a.point - ray.point).length < (b.point - ray.point).length
    }.first
  }
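
The intersectionBetween helper isn't shown here, but as a rough sketch it could look like the following, using the quadratic from the Wikipedia line-sphere intersection equations (and assuming Vector4 also provides dot, normalized(), and multiplication by a scalar):

  func intersectionBetween(ray: Ray, sphere: Sphere) -> Intersection? {
    let d = ray.direction.normalized()
    let oc = ray.point - sphere.center // vector from the sphere's center to the ray origin
    let b = oc.dot(d)
    let c = oc.dot(oc) - sphere.radius * sphere.radius
    let discriminant = b * b - c

    // A negative discriminant means the ray misses the sphere entirely
    if discriminant < 0 { return nil }

    // Take the nearer solution, and only accept hits in front of the ray origin
    let t = -b - discriminant.squareRoot()
    if t <= 0 { return nil }

    let point = ray.point + d * t
    return Intersection(
      point: point,
      normal: (point - sphere.center).normalized()
    )
  }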

 

Modelling materials

Once we've found an intersection, we are tasked with bouncing the light ray. How this happens depends on the material the ray intersected with. A material, for our purposes, must be able to take an incoming ray and an intersection and return a bounced ray.

  protocol Material {
    func bounce(ray: Ray, intersection: Intersection) -> Ray
  }

How the ray gets bounced affects what the object looks like. To make shadows, we know that some photons need to get absorbed somehow. Each time a ray is bounced, we can dim the intensity of the light of the outgoing ray a bit (for example, multiply the red, green, and blue fields by 0.7.) The more bounces the light goes through, the darker the colour becomes. If no intersection is found, we can multiply the light colour by some background colour and stop bouncing, as if there is sky in every direction as a source of photons.

If a ray does hit an object, we have to think about what direction we want to bounce the ray in. Reflective materials abide by the tenth grade science class mantra: the angle of incidence equals the angle of reflection. That is to say, if you reflect the incoming ray about the surface normal, you're going to make a mirrorlike material. If instead you choose to reflect the light in a totally random direction, you've diffused the light and made a matte material (although, make sure you absorb the ray if it is randomly bounced into the inside of the sphere.) A surface that reflects rays but with a little bit of random variation will look like brushed or frosted metal.
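
For example, a mirror-like material could be sketched like this; the Vector4 dot product and scalar multiplication, and dimming a Color by multiplying it by a Float, are assumed helpers:

  struct Mirror: Material {
    func bounce(ray: Ray, intersection: Intersection) -> Ray {
      let d = ray.direction
      let n = intersection.normal
      return Ray(
        point: intersection.point,
        // Reflect the incoming direction about the surface normal
        direction: d - n * (2 * d.dot(n)),
        // Dim the light a little on every bounce so shadows emerge
        color: ray.color * 0.7
      )
    }
  }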

2731551_149697144071_materials.png

The more random variation there is in the bounced ray, the more matte the light on the material looks.

 

Monte Carlo rendering

You'll notice that the scene looks pretty grainy, specifically around areas that should be blurred. This is because, for each pixel, we randomly bounce a photon around. It's bound to not be quite smooth because of the random variation. To make it smoother, we can simply render each pixel multiple times and average the values. This is an example of a Monte Carlo algorithm. From Wikipedia, a Monte Carlo algorithm "uses random numbers to produce an outcome. Instead of having fixed inputs, probability distributions are assigned to some or all of the inputs." The more averaged samples we take of the image, the closer to an actual "perfect render" we get. The random grains, averaged together, end up looking like a smooth blur.
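
In code, that sampling can be as simple as averaging repeated renders of the same pixel. Here traceRay(x:y:) is a hypothetical function that casts and bounces one randomised ray for that pixel, and Color is assumed to support addition and division by a scalar:

  func renderPixel(x: Int, y: Int, samples: Int) -> Color {
    var total = Color(0x000000)
    for _ in 0..<samples {
      // Each call bounces its ray along a different random path
      total = total + traceRay(x: x, y: y)
    }
    return total / Float(samples) // the average smooths out the grain
  }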

We can make more complicated materials with this sampling technique by having it, for example, reflect a photon some percent of the time and refract it the rest of the time. Having a higher probability of reflecting at steeper angles is a good way to create realistic-looking glass. You can make glossy materials by having a small probability of reflection and a higher probability of diffusing the light.
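
As a sketch of that idea (the reflect, refract, and reflectionProbability(at:) helpers are hypothetical, and randomBetween is the same kind of helper used in the motion blur code later in this post):

  struct Glass: Material {
    func bounce(ray: Ray, intersection: Intersection) -> Ray {
      // Reflect more often at steeper angles, refract the rest of the time
      if randomBetween(low: 0, high: 1) < reflectionProbability(at: intersection) {
        return reflect(ray: ray, intersection: intersection)
      } else {
        return refract(ray: ray, intersection: intersection)
      }
    }
  }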

 

Motion blur

Another cool thing we can do using the Monte Carlo technique is create motion blur. In real life, cameras have their shutters open for a real, non-infinitesimal amount of time. The longer the film is exposed to photons, the more photons hit it, and the brighter an image you get. If an object is moving while the film is exposed, photons reflected from all points in time along the object's trajectory will end up on the film, resulting in the object appearing smeared.

We can model this in our raytracer, too. Let's say a sphere moves from point A to point B while our virtual camera shutter is open. For every ray we cast, before we check for intersections between the ray and the sphere, we pick a random point along the object's trajectory for it to be at, and use this version of the object for collisions. We use a different random location for the next ray. After doing enough samples of this, we should end up with a nice blur.

In order to actually implement this, we need to represent the object's motion. A transformation matrix works well for this purpose. When you multiply a matrix by a vector, you get a different vector. A simple one is the translation matrix:

2731551_149697144013_transformation.png

Applying a translation transformation to a point to produce a new point

The end result is a shifted coordinate. You can also create rotation, stretch, and skew matrices. By multiplying matrices together, you compose the transformations. You can invert a transformation by inverting its transformation matrix.
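
As a small example, here's a translation applied to a point versus a direction vector; the Matrix4 rows initializer and the Matrix4 * Vector4 operator are assumed to exist in whatever matrix library you use:

  // Move by (3, -2, 5)
  let translate = Matrix4(rows: [
    [1, 0, 0,  3],
    [0, 1, 0, -2],
    [0, 0, 1,  5],
    [0, 0, 0,  1]
  ])

  let point = Vector4(x: 1, y: 1, z: 1, w: 1)
  let moved = translate * point // (4, -1, 6, 1): the point gets shifted

  let direction = Vector4(x: 0, y: 0, z: 1, w: 0)
  let same = translate * direction // (0, 0, 1, 0): directions are unaffected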

So, back to our motion blur. The camera shutter is open as an object moves from A to B, and we can represent A and B using transformation matrices. If you want to find a version of an object at a random point on its trajectory to check collisions with, you can interpolate between them:

  func randomOnTrajectory(object: Sphere, from: Matrix4, to: Matrix4) -> Sphere {
    let amount = randomBetween(low: 0, high: 1)
    let transformation = from*amount + to*(1-amount)
    return Sphere(
      center: transformation * object.center,
      radius: object.radius
    )
  }

That gives you a result like this:

2731551_149697144033_blur.png

A sphere moving across the x axis while the virtual shutter is open

 

Parallelizing

Because it takes multiple samples to get a good-looking result, it makes sense to try to get as much throughput as possible while rendering. The great thing about raytracing is that each sample is calculated completely separately from the other samples (there's a technical term for this, and it is, no joke, referred to as "embarrassingly parallel".) You can concurrently map by running each sample in a separate thread, and when each is done, reduce by averaging them into a final result.
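
As a sketch of that map/reduce using Grand Central Dispatch (renderSample(), the Image type, and the average(_:) reducer are hypothetical stand-ins for however your renderer produces and combines samples):

  import Foundation

  func renderAveraged(samples: Int) -> Image {
    var results = [Image]()
    let lock = NSLock()
    // Map: render every sample concurrently, since none depend on each other
    DispatchQueue.concurrentPerform(iterations: samples) { _ in
      let sample = renderSample()
      lock.lock()
      results.append(sample)
      lock.unlock()
    }
    // Reduce: average the independent samples into the final image
    return average(results)
  }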

 

Going further

After implementing everything so far, there is still plenty that you can add. For example, most 3D models aren't made from spheres, so it would be helpful to be able to render polygonal meshes. By jittering the angle of each ray cast slightly, you can make a nice depth of field effect where objects closer to and further from the camera than the focal length appear more blurred. You can try rendering gaseous volumes rather than just solids. You can subdivide the space in front of the camera so that you don't have to check collisions with every object in the scene.

The code for the raytracer I wrote is available on GitHub for reference, although it is also still a work in progress. It's incredibly rewarding to write programs like this where the feedback is so visual, so I encourage you to try it yourself too! I hope the topics I've covered so far are enough to shed light on what goes into raytracing. Pun intended, of course.


Hello, I'm back

2017-04-04 17:02:07 by Pahgawk

Hello again, it's been a while! I made an animation for the first time in a few years:

So how did I arrive here? Why did this take so long?

Well, let's go back to high school. After finishing NATA in 2013, I was pretty burnt out. I was also bogged down with work, stressed, and depressed. In 2014 I was graduating high school and starting university, so I started making another animation about that process of moving on. I was setting out to make a coming of age story as someone who hadn't really gone through that process yet myself. I wanted the scope to be larger than something I'd make in a month for NATA, too, so it was larger and more ambitious than I was used to. Needless to say, I never finished it. I think I have the source files lying around somewhere so maybe one day I'll come back to it, after revamping the story. But it's unlikely.

I went to the University of Waterloo for software engineering. I've made posts on here before about programming, so it's nothing new that that's something I have a lot of interest in. It took a while to get into the flow of university life, though. Although I like what I do, programming took up all of my time. My university program is a little weird. It's a five-year program where every four months, we alternate between a school term and a term interning at a company. We don't get summers off. In order to get the sorts of work experience I wanted, I had to (and also wanted to) spend a lot of my time improving my programming skills. I think I've come a long way, and I've made a lot of things I'm proud of. My end goal is to bring computing back to the arts at some point, so I've been doing some computer graphics stuff recently. I made a raytracing 3d renderer that can do stuff like focal blur and motion blur, so that was a big learning experience. Recently I also made a thing where you put in a midi file and it plays All Star to the tune of the midi as a project for TerribleHack, a hackathon I helped start in Waterloo where you take a terrible idea and make it. I've worked at Shopify, Athos, and Remind so far, and I'm going to be working at Google in Mountain View this summer. I've been busy, clearly.

It took a while, but I'm at a point again where I think my life is in pretty good shape. I'm feeling better than I have in a long time, I'm doing stuff that I love, and I'm working with great people. Part of having your life in order seems to be getting a bit of free time. I sort of stopped posting my music to Newgrounds, but I haven't stopped making it this whole time (maybe I'll start uploading it again.) I also make crappy, sarcastic cover songs with my roommates every term. And, recently, I started making animations again.

I've made small looping animations as jokes with friends, but I hadn't really made anything significant (defined, to me, as "longer than 30 seconds and with sound maybe") until recently. I'm taking a public speaking course to get an English credit, and we had to make a video essay for one assignment, so I ended up making this:

What can I say, I liked the feeling of making things. So I tasked myself with making an actual animation again. I tried to keep the scope small to make sure I'd actually finish, and the animation I released today was the result. Interestingly, my animation style changed a bit. At first I thought my drawings were shakier, but I've come to realize that that isn't entirely true. In the past, I'd have a separate layer for the heads of characters, where I would drag around the same drawing on top of the frame-by-frame animated body. This way, the shape of the head barely ever changes, and it doesn't look shaky. Now, I don't like the feeling of dividing everything up into a million layers and making everything pristine, so I generally have one layer per character and I redraw everything. So my animation isn't any shakier than before, but since it's all being redrawn, including the head, it visually looks a bit shakier. Which is alright, it's a looser style, and I think it allows for freer acting.

I also learned you can import video into Ableton Live so you can sync up sound to video. If you use Ableton and don't use it to score your movies, I highly recommend it. I feel like I've been doing audio editing in the dark ages up until now.

Anyway, don't expect things from me too frequently, but I don't think I'll be gone for as long this time. Animation is a big part of my identity and I'm glad to be back.

- Dave


NATA Sarcastic Collab Last Call

2015-07-31 07:27:54 by Pahgawk

Hey guys, turns out this weekend is a long weekend for me, so that means you have until Sunday night to send in entries for the Sarcastic Body Language Collab. 

Rules are here: 

http://pahgawk.newgrounds.com/news/post/937030


NATA Sarcastic Body Language Collab!

2015-07-17 07:40:07 by Pahgawk

UPDATE: deadline is August 1st. I promise I will get the deadline right the first time next time!

Hey guys, a round of NATA just finished, that means it's time for a new collab!

The theme this time is body language: try to have a character show some emotion without giving them any recognizable facial features! Here's mine, for reference:

2731551_143760601532_scare.gif

The deadline will tentatively be set for the 1st, with the possibility of extension (tell me if you need more time!)

The usual rules:

  • Anyone can join, even if you're not part of NATA!
  • The Flash project is 720x405, 24fps (although if you make yours at any 16:9 ratio, we can scale it)
  • You don't have to use Flash either. As long as it's the right size and frame rate, you can just send me a video file. Don't worry about conversion, I'll handle that (but in case you really want to, I convert to 720x406 flv. It doesn't seem to like having an odd number of pixels in the height.)
  • If you send me a .swf, do not use MovieClips with nested animations or filters as they won't import. Graphics are fine.
  • If you send me a .fla, then MovieClips work.
  • In general, try to avoid filters like blurs and glows. On the menu page with all of the animations, those in particular are expensive and slow things down. If you really need them though, we can export a video and use that without any issue.
  • We don't use sound (the collab has background music and also all of the entries are shown together on the menu.)
  • Aim for a short animation or loop less than 15 seconds (this is more of a guideline than a rule)

Here's our last collab for reference.


Using Swivel in batch from the terminal

2015-06-27 00:23:48 by Pahgawk

TL;DR turns out you can use Swivel from the command line, but it doesn't let you change the starting frame that way, so I tried to patch it in myself.

You probably know of and have used Newgrounds's swf to video converter, Swivel. It's a really fantastic tool and is really the only acceptable tool for converting Flash videos. So that is why I wanted to use it when presented with the task of downloading all the NATA entries and converting them all to video.

When presented with a task like this, there is a lot of repetitive process involved, so I like to try to automate things. Perl is my tool of choice to do web scraping and automation, and I figured I'd throw in a system call to run Swivel on the downloaded swf files that I scrape from Newgrounds.

The thing is, Swivel doesn't claim to be usable from the command line. It doesn't say so anywhere on the wiki page. But I noticed that if you run it, followed by the name of a file, it will convert it using the default settings:

$ Swivel file.swf

I wondered if maybe Swivel actually did have a command line API, but just an undocumented one. To find out, I used a Flash decompiler and took a look at bin/Swivel.swf from the Swivel installation directory. It turns out that if you want to use Swivel from the command line, you have a few different flags you can set:

$ Swivel file.swf -s 1280x720
Set the output width and height

$ Swivel file.swf -vb 1.5m
$ Swivel file.swf -vb 256k
Set the video bitrate

$ Swivel file.swf -ab 1.5m
$ Swivel file.swf -ab 256k
Set the audio bitrate

$ Swivel file.swf -sm letterbox
$ Swivel file.swf -sm crop
$ Swivel file.swf -sm stretch
Set the video scale mode

$ Swivel file.swf -a none
$ Swivel file.swf -a swf
$ Swivel file.swf -a audio.mp3
Set the audio for the exported video (No audio, audio from the swf, or an external file)

$ Swivel file.swf -o converted.mp4
Set the output file

$ Swivel file.swf -t
Set transparent background

This is all really helpful stuff. However, I also needed to be able to set the start frame. My script goes and downloads videos from Newgrounds with the NATA round tag (like "nata2015open"), and swfs uploaded to Newgrounds all have preloaders. This causes the conversions to hang when being run through the command line. So, I set out on a mission to add the feature in to Swivel.

The trouble is, when you decompile an swf file, you don't get actionscript, you get bytecode.

For example, here's a simple script:

var test = [1, 2, 3]
trace(test.length);

Here it is in AVM2 instructions:

getlocal0         
pushscope         
findproperty      test //nameIndex = 7
pushbyte          1
pushbyte          2
pushbyte          3
newarray          [3]
setproperty       test //nameIndex = 7
pushundefined     
coerce_a          
setlocal1         
findpropstrict    trace //nameIndex = 8
findpropstrict    test //nameIndex = 9
getproperty       test //nameIndex = 9
getproperty       length //nameIndex = 10
callproplex       trace (1) //nameIndex = 8
coerce_a          
setlocal1         
getlocal1         
returnvalue       
returnvoid        

In case you're not familiar with the general concept of how programming languages and assemblers work, here's a brief overview. Your computer doesn't automatically understand ActionScript. In fact, it doesn't really understand any normal programming language, even low level ones like C. Your computer reads Assembly, or basically, a bunch of hexadecimal numbers, where each one corresponds to a specific command. All other languages get turned into Assembly in some way or another, with some languages separated from it by more layers than others (scripting languages like Python or ActionScript sit at a higher level than something like C.)

So, ActionScript doesn't compile directly to Assembly, but it compiles to something called byte code, which essentially serves the same purpose. Rather than read commands directly into the CPU, commands get read into the ActionScript Virtual Machine. The way AVM2 reads in commands is very similar to how Assembly works: Each command name maps directly to a hexadecimal number, which represents a simple command to execute.

There are no if statements, but there are markers in the code, and you can use the "jump" instruction to go to different ones depending on whether conditions are met. Instead of running functions with inline parameters like addNumbers(1, 2), you push the parameters to the stack (pushbyte 1, pushbyte 2) and then call the function, specifying how many things on the stack are meant as parameters (callproperty addNumbers 2). It's not the most intuitive thing to work in, but it works.

I essentially wanted to add in one line of code to start recording on frame 2:

swivelJob.duration = RecordingDuration.frameRange(2, swivelJob.duration.params[1]);

Here's what that ended up being:

getlocal0         
pushscope         
findpropstrict    swivelJob //nameIndex = 7
getproperty       swivelJob //nameIndex = 7
findpropstrict    RecordingDuration //nameIndex = 9
getproperty       RecordingDuration //nameIndex = 9
pushbyte          2
findpropstrict    swivelJob //nameIndex = 7
getproperty       swivelJob //nameIndex = 7
getproperty       duration //nameIndex = 8
getproperty       params //nameIndex = 11
pushbyte          1
getproperty       null //nameIndex = 12
callproperty      frameRange (2) //nameIndex = 10
dup               
setlocal2         
setproperty       duration //nameIndex = 8
getlocal2         
kill              2
coerce_a          
setlocal1         

Because this is all pretty new to me and lends itself to a very awkward workflow, I basically just added in that code block without any conditionals: in my patched version of Swivel, if you run a conversion from the command line, it always assumes that it should start on the second frame. Ideally, I'd make it so there are extra command line flags you could set to specify the start and end frames (for example, -sf and -ef), but this is all I have done for now.

If you want to try it out, you can download the patched swf here. Rename it to just Swivel.swf and put it in your Swivel installation folder /bin (so, on Windows, that'd end up being C:/Program Files/Swivel/bin). It's a good idea to back up your original Swivel.swf first, but if all else fails, you can redownload the original here.

If enough people would find it useful, I might consider going and actually adding in the feature fully. Or, if any of the Newgrounds staff feel like open-sourcing the ActionScript source, that'd make it really easy to add the feature in, so I'd gladly go and do it! Understandably they might not want their source public though.

So yeah, that's what I've been doing in my spare time for the past few days!

Although the entirety of the NATA scraper isn't done yet, you can follow its progress on GitHub if you're so inclined.


Sarcastic Inanimate Collab

2015-06-18 07:57:12 by Pahgawk

Update: I forgot that there was less time between rounds this year, so I'm extending the deadline until Sunday the 5th.

 

Hey guys, so in between rounds of the Newgrounds Annual Tournament of Animation, we organize these massive collaborations called the Sarcastic Collabs. Here's one from last year for reference. And with the first round of NATA drawing to a close, it's time for the first one of this year!

The theme is Inanimate Objects.

That means taking things that don't usually move or have any emotions and giving them life. Think Pixar's Luxo Jr or the teapots and kitchenware in Beauty and the Beast. Here's my entry:

2731551_143462769622_couch.gif

There are a few rules:

  • Anyone can join, even if you're not part of NATA!
  • The Flash project is 720x405, 24fps (although if you make yours at any 16:9 ratio, we can scale it)
  • You don't have to use Flash either. As long as it's the right size and frame rate, you can just send me a video file. Don't worry about conversion, I'll handle that (but in case you really want to, I convert to 720x406 flv. It doesn't seem to like having an odd number of pixels in the height.)
  • If you send me a .swf, do not use MovieClips or filters as they won't import. Graphics are fine.
  • If you send me a .fla, then MovieClips work.
  • In general, try to avoid filters like blurs and glows. On the menu page with all of the animations, those in particular are expensive and slow things down. If you really need them though, we can export a video and use that without any issue.
  • We don't use sound (the collab has background music and also all of the entries are shown together on the menu.)
  • Aim for a short animation or loop less than 15 seconds (this is more of a guideline than a rule)

And that's it! PM me your entries via Dumping Grounds link or something and I'll add you to the collab! Tentative deadline is the evening of Sunday the 27th.

Also, let me know if you want to make the preloader art for the collab!


Sarcastic VFX Collab last call, Pahgawks.com retrospective

2014-08-23 12:41:42 by Pahgawk

Hey guys, quick update! This weekend is the last weekend to send in entries to the Sarcastic VFX Collab! Here's mine, for inspiration:

2731551_140881210131_pahgawk_the-sound-of-music.gif

 

And in other news: After many years, I finally renamed my website from pahgawks.com to davepagurek.com. The reason for this is that the focus has changed: rather than being a showcase for my animations, it's mostly being used now as a portfolio for my programming endeavors, for job applications and stuff in university. There was a need for some more professionalism, and the name is the place to start.

I'd had the old name, pahgawks.com, since the third grade. It's named after an old set of characters I used to make stories about, a group of flightless birds named Pahgawks (pronounced like puh-GOCKS, for those who have wondered how to interpret the weird spelling.) Want to know what that was all about? Well, here's what the website looked like in 2008. It existed earlier than that, but unfortunately (or fortunately?) I don't have any screenshots of earlier versions of the site.

2731551_140881210112_late-2008.png

The eyes followed your mouse. Javascript.internet.com was my friend!

I started making animated GIFs in Jasc Animation Shop at first - that was how I got into animation. Then, because I seemed interested, my dad got me a trial of Flash MX for Christmas one year. I don't have records of all my early animations, but here is the earliest one I have on my YouTube account (although, a lot of my really old stuff is unlisted because it's embarrassing):

I also didn't know how to properly use a microphone. I thought that if your voice wasn't loud enough, the only way to make it louder was to stick the microphone closer to your mouth. So, I had the microphone literally in my mouth when I recorded audio for a while. I also didn't know how to have more than one sound playing at the same time at first, because when I googled how to add sound in Flash, I found how to import and play a sound from a url using ActionScript. For all animation purposes, this was a super inconvenient way of doing things, and since I didn't know any better or even understand what the code was doing, lip sync and stuff was really problematic and didn't work well for a while.

I joined Newgrounds initially as a way to get more publicity for videos. At the time I wasn't very visible on YouTube (I'm still not), and I only got positive feedback from friends. Like, it didn't seem to bug anyone that the sound quality was so hilariously bad. I thought I was amazing and although putting stuff on Newgrounds gave me more criticism than praise and I was peeved at first, it brought me down to earth and made me improve a lot.

Here's the first video I ever put on Newgrounds:

Here's one from a year later. Lots of improvement, but I was kind of already stretching what I could do with these characters.

All the videos I was making around that time were getting ridiculously long. I guess at least I was trying to make things more serious and cinematic even if I didn't really plan out stories enough. This was the year I stopped using the Pahgawks characters in animations, but I also began to learn how to make ActionScript games, and Pahgawks were useful as sprites for games. I discovered I actually really liked programming and ended up slowing down progress on animation to pursue that a bit. I made an AS2 platformer game engine that I still use to whip up quick games. It was the first coding project that other people found genuinely useful and that I was actually proud of.

Two years later, I started NATA. Nothing improved my skills in animation faster than NATA, but it also wore me out a lot. I'd been having other health-related issues that made continuing on at the same pace really hard. Programming somehow didn't wear me out as much though so I kept going with that. I really liked the community I discovered from NATA though. Before, I was mostly working in my own little bubble, not talking with anyone else who made animations. NATA connected me to loads of cool people to interact with.

Now, I really only animate for the NATA collabs that I organize. Pahgawks are no more, and as I'm going to university in a week for software engineering, programming will take over for animation full time. I'd intended to make one animation over the summer but lots of other stuff got in the way and prevented that from happening. The thing is, it's not really a priority, which is a strange thing to admit. Animation is now just a hobby, and not exactly my passion. I do intend to finish it eventually though. I've worked on it some more in the past week, albeit slowly. I'm thinking I'm more likely to finish it if I allow myself to work at a lower frame rate, sacrificing smoothness for artfulness and completion hopefully. Here's a clip to prove that, yes, there has been SOME progress:

2731551_140881218261_lhj85nj.gif

Renaming my website represents the end of a chapter in my life and the beginning of another. It's a strange feeling, but it'll be neat to see where it goes from here!



So we're making a new Sarcastic Collab! It's the Sarcastic VFX Collab! If you want to join, PM me your entry with the following specs:

For a flash file:

  • 720x405
  • 24fps
  • Don't use any VCAMs or actionscripted effects unless you send me a video file
  • Try to avoid using filters or nested movieclip animation unless you send me an fla instead of an swf (animations in graphics are fine)
  • Break apart any text you use

For a video file:

  • Anything 16:9 is good (I can handle converting to the lower size and compression)
  • 24fps
  • mp4, mov, anything like that is alright (image sequences are ok still but I'm going to try to use video files preferably this time)

Like the previous collabs, your name will appear on the screen automatically so you don't have to add a logo.


Ok, now time for life updates.

So I disappeared from the site for a month and a half to figure out some personal stuff. I think that's mostly dealt with, so I'll be back here and active again. I haven't quite got myself motivated to work on any long term stuff on my own yet, though, so it'll be a while until I release another full animation of my own that isn't a collab. I need to make animation fun again, which the collabs are.

In terms of other stuff I've been up to, I went to Mountain View, California for a hackathon. For the uninitiated, a hackathon is where you spend a day or two programming nonstop making a project (hacking as in "hack together something cool", not hacking as in "let's hack into some government databases".) The one I went to was run by Y Combinator, a startup incubator responsible for companies like Dropbox and Shopify. I got a bunch of free t-shirts and small plastic dinosaurs and stuff while I was there from established startups that were there as sponsors.

2731551_140756188591_IMG_20140802_210816.jpg

Above: The aforementioned plastic dinosaur, offering me distraction while on a break from programming in the middle of the night.

Me and two of my friends built a tool to help you decide what programming resources to learn next, to fight the problem where you go to look up something cool to make and get overwhelmed by how many directions you can go in. It's not super great, but given we spent around 24 hours of nonstop work on it on not much sleep to begin with, I'm proud of the result.

The guy who won the whole thing built a malaria blood test using two iPhones. Once a blood sample is mounted on a slide, one iPhone is used as a backlight for the slide, and the other takes a photo of it. Then, a computer vision/machine learning algorithm figures out which cells are red blood cells and which are malaria cells with a stunning 5% margin of error (and this error is proportional to the resolution of the photo given, so it has the potential to get even better.) There were lots of ideas at the hackathon, but a few of them, like this one, just really blew my mind.

Then after the hackathon I toured around the Google campus a bit (my friend works there for her university co-op term) and I flew back to Ottawa. The flight was horrid and took around a day in total after getting stranded in the Chicago airport after a combination of mechanical failure, bad weather, and bad service from United Airlines (they said they booked me a ticket and then somehow didn't actually do it right so I wasn't on the flight. Damn.)

2731551_140756200552_IMG_20140803_192413.jpg

Above: The dinosaur visiting the Google sign in front of the Android building.

I'm back now and I'm preparing to go to school at the beginning of September. If anyone is in southern Ontario September 19-21 and want to do a hackathon, you should go to Hack the North. I'll be there!

-Dave



Our latest NATA collab is out and can be found here: http://www.newgrounds.com/portal/view/642175 It definitely isn't as polished as it could be and I feel like I'm forgetting something I should have added or something like that, but I just need to get it published right now.

So in the past month, I've been sort of struggling to keep myself from falling apart and in the past week, I had to go out of town for the funeral of a family member. So I'm not exactly in the greatest shape right now. I had intended to make an animation over the summer but it's looking like that won't be feasible. I like to operate in a little bubble and mostly pretend everything is ok and finish everything on time anyway at my own expense, but I think I need to take some time to make sure that I'm alright before continuing with anything. This might mean that since I'm not making an animation this summer, I might not make any large projects at all this year. I guess that's just an unfortunate consequence of the way things are right now.

I'm still going to try to finish the future NATA collabs though, and another will be made for Robot Day. Due to the current circumstances I may need some assistance getting that made though, we'll see.

I'll try to keep up with everything, but my sincere apologies go out to everyone if I don't. Thanks for your understanding.


NATA Sarcastic Creatures Collab

2014-06-18 16:04:20 by Pahgawk
Updated

Hey guys! The NATA Sarcastic Creatures Collab is starting! If you want to make something for that, PM me an animation with a creature of some sort (720x405, 24fps) by next week and I'll add you in!

update: Here's my entry:

2731551_140318424772_pahgawk_stairs.gif

In other news I just had my last ever class of high school. I've got exams for the next week so I'm really busy and haven't actually watched any of the NATA entries yet, and I might have to delay doing so for a week. BUT THEN I'M DOOOOONE WATERLOO HERE I COME