The amazing light-ness of seeing

I’m on a photography kick. So shoot me.


Photography is all about the light – recording photons for posterity.


When you take a photo is sometimes even more important than where you are. Same place, close to the same angle… which is more pleasing?

or ?

I know which one I’d pick.

All that changed was the time of day… one was shot in the “golden hour” – so-called because the light turns truly golden in color – near sunset. There is a thinner golden window early in the morning, especially in spring and fall when the sun sits at a lower angle – but I hesitate to call it a golden hour… more like twenty minutes or so.
Here are a couple of first-light images…

A matter of focus

Which of these photos is in proper focus:

1. front focus




In this case, focus becomes a matter of preference.

What we’re really dealing with is another issue, Depth of Field (DoF) – that portion of a photograph which is perceived to be acceptably sharp.

Depth of Field is one of those “advanced” topics in photography… until you grasp how it works, and how to control it. (Note to the purists – this discussion is henceforth simplified)

Controlling DoF is all about controlling the three variables which directly affect it –  lens focal length,  lens aperture and camera-to-subject-distance. All three interact to produce differences in DoF. And while your camera may be largely automatic and you may have no control at all over one variable, you’ll almost always have control over one or both of the others.

Lens focal length is usually expressed in millimeters (mm). Longer is the same as “zoomed in” – short is “wide angle.” Most point-and-shoot cameras have modest zoom lenses of the 3x variety; some cameras have “ultra-zooms” in the 10x to 15x range. Cellphone cameras often offer no control at all over this variable. Note that a zoom lens isn’t required – at least with cameras supporting interchangeable lenses, you can also use “prime” (fixed-focal-length) lenses… and a prime lens will usually offer a bit more control over aperture values. Wide-angle and telephoto are relative terms – the size of the sensor (or film frame) determines the useful ranges. In 35mm photography (and most dSLR systems), 18mm is wide, 200mm is telephoto, 500mm is serious telephoto, and 1250mm is wicked close (and the DoF is paper-thin!).

For our purposes, longer focal length (telephoto) produces shallower depth of field and shorter focal length (wide-angle) produces deeper DoF.

Lens aperture is the size of the opening of the iris in the lens; i.e. how big a hole the light comes through. It’s expressed as the ratio of focal length to opening diameter, in a scale of f-stops. F-stops for camera lenses run from f/1.4 at the wide-open (think really huge) end to tiny pinpricks of light up around f/64. Each full stop up the scale halves the amount of light; thus if we take f/1.4 as “full” open, then f/2 (the next increment up) allows 1/2 the light, f/2.8 is 1/4, f/4 is 1/8 and so on. Photographers usually refer to low f-stop values as “open” (or “fast” – they gather the light faster), and to high f-stop numbers as “closed” (or “slow”). Zoom lenses are limited as to how far open they get; f/2.8 is a very fast zoom (wide open) and f/8 is rather slow (closed down).
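The halving falls straight out of the arithmetic: light admitted is proportional to the area of the aperture opening, which varies as 1/N² for f-number N. A quick sketch in plain Python (no particular camera assumed):

```python
def relative_light(n: float, reference: float = 1.4) -> float:
    """Fraction of light admitted at f/n relative to f/reference.

    Light gathered is proportional to aperture area, i.e. 1/N^2,
    so the ratio between two f-numbers is (reference/n)^2.
    """
    return (reference / n) ** 2

# Walking up the standard full-stop scale roughly halves the light each step
for n in (1.4, 2, 2.8, 4, 5.6, 8):
    print(f"f/{n}: {relative_light(n):.3f} of full open")
```

(The printed fractions are near-halves rather than exact halves because the marked stops are rounded values of powers of √2.)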

For our limited interest in Depth of Field, the rule is this: for a given focal length, depth of field is reduced as you open the lens, increased as you close the lens.

Camera-to-subject-distance is self-explanatory… isn’t it? In the photos above, camera-to-subject-distance is the variable which has changed – I shifted the subject from the up-close gun barrel to the more distant aircraft. Focal length stayed constant at 50mm and aperture at f/10. If I’d backed off a bit (probably about 10 feet would have done it) both subjects would have been in focus… but that wasn’t the effect I wanted!
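The interaction of all three variables can be made concrete with the standard hyperfocal-distance approximation. A rough sketch – the 0.03 mm circle of confusion is an assumed value for a 35 mm frame, and the focal lengths and distances below are illustrative, not the ones from the photos:

```python
def depth_of_field(focal_mm: float, f_number: float,
                   subject_mm: float, coc_mm: float = 0.03) -> float:
    """Approximate total depth of field in millimetres.

    Uses the hyperfocal-distance formulation H = f^2 / (N * c) + f;
    the near and far limits of acceptable sharpness follow from H.
    Returns infinity when the far limit extends beyond infinity.
    """
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    if subject_mm >= h:
        return float("inf")  # focused at/past hyperfocal: sharp to infinity
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    far = subject_mm * (h - focal_mm) / (h - subject_mm)
    return far - near

# Longer focal length, same aperture and distance -> much shallower DoF
print(depth_of_field(50, 10, 3000))   # 50 mm at f/10, subject at 3 m
print(depth_of_field(200, 10, 3000))  # 200 mm at f/10, subject at 3 m
# Opening the lens also shrinks the DoF
print(depth_of_field(50, 2.8, 3000))  # 50 mm at f/2.8, subject at 3 m
```

Note how the rule of thumb plays out: the 200mm figure comes out a small fraction of the 50mm figure, and f/2.8 yields a much thinner zone than f/10.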

Let’s look at these two photos, shot back-to-back. The closeup is at f/5.8 and 300mm focal length; the wide-angle view at f/4.5 and 75mm.



The increased focal length more than makes up for the slight closing of the lens – note how blurred the background is compared to the second photo, taken about ten seconds later.

A milestone reached…

Post #25, which, to Automattic, means they shall now unleash the automated hounds-o-advertising and try to convince me to “upgrade to pro.”

Not this week. Sorry.

And now for something completely different… over the next several posts, the focus of the blog will change. (Focus? He’s got focus? Yep – we’ll prove that).

Thus endeth the short post #25.

Keeping up the pace…

Starting tonight the assignment for the 232 crowd (web architecture) is to build a blog on a hosted platform, and update it three times a week.

If I assign it, I should be able to do it.

Famous last words, but perhaps not.

The in-between-class questions today have centered on hosting providers – which will be the subject of a homework assignment, I think… but not just yet. First we have to cross this bridge – getting the first “real” content up (as opposed to “un”-real, which is how I classify Google-Sites content).

Initially, student blogs will be linked to the class webpage inside the  college portal. Upon approval by students, selected blogs may be featured as links from this blog… but again, only with explicit approval of the affected student(s).

Almost time for class…

Hacking HTTP via GET; part the second. (finally!)

When I left off (Hacking HTTP via GET; part the first) with this subject, I demonstrated the basics of “hacking” via modifying parameters on a GET method.

But what of methods? And why GET? And what else is there?

A method is a subroutine (or function, or procedure, or whichever semantic construct you prefer) which is bound to an object (or class), and is executed (or performed) when invoked on an instance (copy) of that object. Or so sayeth the oracle of the Wikipedia.

In the case of the web, wherein we are in a stateless protocol (that is, there is no implicit memory of what came before), the protocol itself defines a group of “methods” – or actions to be taken.

The currently-defined (HTTP 1.1; RFC 2616) methods are: OPTIONS, GET, HEAD, POST, PUT, DELETE, TRACE and CONNECT. For our purposes, the methods of particular interest are GET and POST.

Why GET? Because it’s the basic, easiest-to-comprehend (and generally easiest-to-program) method when data needs passing from the client to the server. When you go to a web page such as the homepage of this blog, your browser sends a GET request with a path of “/” (root, or whatever is aliased to root).

GET has an attribute (feature? flaw?) of displaying all the parameters of the request as part of the URI.

It’s this behavior that makes it possible to “hack” via GET – the parameters are exposed, and thus changeable before the request is sent. It’s also this behavior which makes GET the most popular way to send parameters – it’s much easier to debug! And there is a deeper, more technical reason as well: buffering on the server side is handled by the web-server software, not the application program.
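Because GET carries its parameters in the URI, you can read – and edit – them with nothing more than the address bar. Building a request URL programmatically makes the exposure obvious (the host and parameter names here are hypothetical; standard-library Python only):

```python
from urllib.parse import urlencode, parse_qs, urlsplit

# Hypothetical search parameters -- everything here ends up in plain sight
# in the request line, e.g.  GET /search?item=42&user=demo HTTP/1.1
params = {"item": "42", "user": "demo"}
url = "http://example.com/search?" + urlencode(params)
print(url)  # http://example.com/search?item=42&user=demo

# Anyone (or anything) can pull the parameters back out -- and change them
# before the request is ever sent
recovered = parse_qs(urlsplit(url).query)
print(recovered)  # {'item': ['42'], 'user': ['demo']}
```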

POST is the other method for sending data; the authors of HTTP 1.1 thought most forms would be handled via POST requests. POST hides the data being sent – and is capable of handling much larger objects than is GET. But it is significantly more trouble to program for a POST method, and debugging is a bit more “interesting” as well.
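The contrast shows up even before anything hits the wire: with POST, the same parameters travel in the request body, so the URL stays clean. A minimal sketch (same hypothetical host and parameters as above; nothing is actually sent):

```python
from urllib.parse import urlencode
from urllib.request import Request

# The same parameters, prepared for a POST: they ride in the request body,
# not the URI.
body = urlencode({"item": "42", "user": "demo"}).encode()
req = Request("http://example.com/search", data=body,
              headers={"Content-Type": "application/x-www-form-urlencoded"})

print(req.get_method())  # POST  (urllib switches from GET when data is set)
print(req.full_url)      # http://example.com/search  -- no parameters visible
print(req.data)          # b'item=42&user=demo'       -- hidden in the body
```

This is also why POST can carry much larger payloads – a body has no practical length limit, while very long URIs may be rejected.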

Of the other methods, HEAD is widely used – it requests and receives header and meta-information about a resource, and is often issued by browsers simply to check if the server version is newer than the locally-cached version.
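A HEAD request is just a GET that asks for the headers alone – enough to compare Last-Modified (or ETag) against a locally cached copy without transferring the body. A sketch of constructing one (hypothetical URL and date; the request is built but not sent):

```python
from urllib.request import Request

# HEAD returns the same headers a GET would, but no body -- handy for
# checking whether the server's copy is newer than the cached one.
req = Request("http://example.com/page.html", method="HEAD",
              headers={"If-Modified-Since": "Sat, 01 Jan 2011 00:00:00 GMT"})
print(req.get_method())  # HEAD
```

If the resource hasn’t changed since the supplied date, a conforming server answers 304 Not Modified, and the cached copy is used.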

PUT and DELETE are the precursor methods to WebDAV (web-based distributed authoring and versioning) but are rarely encountered; TRACE is a debugging method and CONNECT deals with proxy tunneling.

Back to the grindstone…

…making new webmasters and developers and support engineers.

Yep – it’s a shiny new semester.

It’s about now that I begin to envy the established bloggers – the ones who find it easy to write hundreds, or thousands of words a day… but enough carping, back to work.

Today I received a welcome piece of news – the server I donated to the college has been placed in a datacenter rack and is operating. We’ll see how well that works out, but if all is well, we will have a Linux system accessible throughout the college network – but isolated from the “real” world. It will also have more compute and storage resources than the antique, publicly-available RS/6000… and will allow for server-side programming. In time we may mount a BSD VM – for shell work only, to demonstrate the differences between System V and BSD styles – but first we need to get the base system verified.

I rebuilt my apple* server this week; it is up and running but without any significant content as yet. Also the homebrew barebones ESXi server is running quite nicely, and in use as a staging and testbed server for cloud operations.

User interface designers and system architects should read the book  Traffic: Why We Drive the Way We Do (and What It Says About Us) by Tom Vanderbilt (link to Amazon) – especially chapter three – and apply the knowledge to your designs. You might also learn some useful driving tricks. This is the best thing related to computing I’ve read this year. So far.

With the new semester under way, the intent is to update the blog at least weekly, perhaps more often than that.

[note: I do not have an Apple-branded machine in working condition. But I do have a server named for a fruit.]