Copyright 2009 Mike Strong

The Movie Look Thing:

Unless otherwise noted all content, images and text were created by and are the copyright 2003-2016 of Mike Strong - No copying or duplication is allowed without express permission from Mike Strong - kcdance.com, artfuldancer.com

 

   
Creating 30 fps video from 24 fps film - two methods:
3:2 pulldown and 6:4 pulldown

 

Why shoot video at 24 frames per second instead of 25 or 30 fps? There is only one real reason: to make video-to-film transfer easier. That reason is going out of date as the quality of video improves, rivaling film and allowing more options during both shooting and editing. This is partly because video is so much cheaper at the front end than film. One should note, however, that film is actually cheaper to archive and store than video. Not only do hard drives, with their moving parts, have to be stored, they have to be run once in a while to make sure they still work, and from time to time, as video formats change, the whole set of files needs to be transferred to new drives and/or copied to another video format for future storage.

For the "Phantom Menace" Star Wars feature, Lucas used a Sony-Panavision camera to shoot a number of scenes. Those who know which scenes those are say you can't see video artifacts. The video scenes are very nice scenes. They were so nice that the next two Star Wars features were shot completely in HDTV 1080/24p format. No film.

The Main Components for "The Look" of Film

 

Film vs Video

Almost all movies will be digital in the near future. Within a few years film will not be manufactured. Everything will be digital. That is not merely because we have new technology. It is because the new technology is less expensive, more convenient and is fast catching up to the image resolution of film.

  1. Film is very expensive
    • initial film stocks cost a lot and must be matched across the shoot
    • lab costs a lot to process the film negatives and then print the takes
    • the edited footage needs to be printed to a master
    • The master is then used to print film reels for distribution
  2. Digital costs far less to shoot - offering more options (redos) within a budget
  3. Digital costs far less to distribute -
    • digital video can be transferred digitally via internet or satellite
    • film requires physical transport of film reels
  4. Digital costs far more to preserve than film
    • Storage of film masters runs about $1,000 a year
    • Storage of video masters runs about $12,000 a year
    • Storage of all the video extras can bring that up to $100,000 to $200,000 per year
    • Video needs to be copied onto new disks and possibly new formats after a period of time
  5. Digital doesn't last as long as film - modern "safety" film, that is
    • Digital formats (especially professional ones) can become unreadable as newer software drops support for them
    • Hard drives can lose data, and their contents need to be transferred to new hard drives from time to time
  6. Film grain has a tonal and detail rendition far exceeding any digital resolving power
    • The physical size of film grain is molecular
    • The physical location of film grain is totally random and varying in size
    • The physical size of video pixels is microscopic but not molecular
    • The physical location of video pixels is regular

Getting The Film Look

As far as I am concerned, film is dead; long live video. That said, a lot of people love the "look" of film and try very hard to get video to look like film. There are two main sources for this "look."

1 - The main reason that film looks like film is the physical medium: a photographic emulsion attached to a plastic base and developed in chemistry. When developed, the detail in film is held by a grain structure which is random, microscopic, and very smoothly graduated in tonal scale. That is something sensor chips cannot match easily, if at all.

2 - The other reasons have to do not with video vs film characteristics but with the general appearance of the image as seen in Hollywood films. Those professional films feature (mostly) shallow depth of field which is not about film so much as it is about sensor size or film size in combination with the lens focal length and f-stop. In this case the "film look" is really a Hollywood look by virtue of using 35mm to 70mm or larger film sizes. A large sensor size in video also gives the same shallow depth of field.

Digital cine (video) cameras (see list below) use sensors having the same width as film gate apertures. A video camera with a sensor the same size as Super 35mm film yields the same field of view and depth of field as a Super 35 mm film motion picture camera when used with the same lenses.
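As a rough illustration of the sensor-size point, here is a sketch of the standard thin-lens depth-of-field approximations in Python. The circle-of-confusion values and the 9.6 mm "equivalent view" lens for a 1/3-inch chip are rule-of-thumb assumptions of mine, not figures from any camera manual:

```python
# Thin-lens depth-of-field approximation (a sketch, not a lens-design tool).
def depth_of_field(focal_mm, f_number, subject_m, coc_mm):
    s = subject_m * 1000.0                                # subject distance in mm
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm    # hyperfocal distance
    near = s * (h - focal_mm) / (h + s - 2 * focal_mm)
    far = s * (h - focal_mm) / (h - s) if s < h else float("inf")
    return near / 1000.0, far / 1000.0                    # near/far limits in meters

# Same framing, same f/2.8, subject at 3 m:
# Super 35-size sensor with a 50 mm lens (CoC ~0.025 mm) vs a 1/3-inch chip
# with a roughly equivalent-view 9.6 mm lens (CoC ~0.0048 mm).
print(depth_of_field(50, 2.8, 3, 0.025))    # a shallow zone, about half a meter deep
print(depth_of_field(9.6, 2.8, 3, 0.0048))  # a zone several meters deep
```

Try other circle-of-confusion values to see how quickly the small chip's depth of field swallows the whole scene.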

Aside from the look of film in terms of grain and tonal gradation, video has all the other advantages. It is the more democratic medium, available at low cost along with very capable, low-cost editing software with professional output. Shutter speed is easily adjusted, especially with direct storage to memory card or hard drive. Further, image detail and tonal gradation are rapidly improving in video, so much so that some video cameras have already been used to record Hollywood films (such as Apocalypto). Video has also been combined with computer-generated animated imagery for the last two and a half decades with ever increasing sophistication (anything in the space-movie or dinosaur-movie line and more).

Advantages of Digital Cinema (Video for Cinema) over Film Shooting and Production

The Red One video cameras generated a lot of buzz in the last few years. The first feature film shot and completed on the Red One 4K was Red Canvas. Director Steven Soderbergh shot two features, The Argentine and Guerrilla entirely with the Red One camera. Soderbergh said "this is the camera I've been waiting for my whole career: jaw-dropping imagery recorded onboard a camera light enough to hold with one hand ... I know this: Red is going to change everything."

These video cameras (digital cinema cameras) are already in wide use by the film industry.

Digital Cinema Cameras in Development

But people are still trying to emulate film. One reason is that film festivals used to require film, as if a movie wasn't a real movie unless it was shot on film. It used to be a sort of Holy Grail to get a video camera that shot at 24 fps so that once the video was edited it could be printed to film and projected that way: cheaper shooting with cinema distribution on film. Film as superior, even when printed from video, has been an article of faith and desire for some years. Printing video to film makes little sense to me.

  1. Video is very inexpensive, while printing even one movie to film can cost thousands.
  2. Video can be transmitted via the internet or satellite rather than shipped on reels in large film cans.
  3. Film is subject to dust and handling damage right from the start. A clean print doesn't last long.
  4. No matter how much detail and tonal gradation film delivers when exposed directly, when video is printed to film, the film cannot deliver more detail or any better tonal range than the original video had from the start.
    • Fine point - in the video-to-film transfer, software is used to interpolate detail and tonality between scan lines and between pixel locations to create a smoother, though not more detailed, film print.
  5. So ... why bother? Use the natural medium for video, a video projector (it looks great).
  6. Besides, film festivals have been accepting video for some years now, with only a few holdouts left. Who wants them? Film-only festivals mean only the well-heeled can submit movies.

In the past there were art festivals which wouldn't allow photographs because photographs were not art, only paintings were art, or sculpture, etc. There are still some art festivals which won't allow photographic art though usually they give some other reason. Film-only film festivals deserve to die a long, lingering death as videographers put up their vibrant and democratic tents on the other side of town (so to speak).

Frame Rate

Sound film is recorded at 24 frames per second. The standard for silent film was usually 16 or 18 frames per second. Film cameras are capable of a wide variety of frames-per-second settings. Films are projected at 24 frames per second. Any film shot at a higher frame rate plays back as slow motion. NFL Films, for example, uses only film cameras and shoots most of its highlight footage at 32 frames per second for a slight slow-motion effect.

24 vs 25 vs 30 vs Anything for Shooting and ...
24 frames shown at 48 and 72 flashes-per-second

Video is recorded at 25-frames-per-second for PAL (Europe, etc.) and 30-frames-per-second for NTSC (USA and North America). Sound film is recorded at 24 frames per second.

Shooting video at 24 fps will NOT make video look like film but you will find a lot of people who are sold on 24 fps as the film look. If the frames-per-second rate was the key to the way film looks, you could reverse the direction and make film look like video by shooting film at either 25 fps or 30 fps. But you don't get PAL by shooting film at 25 fps and you don't get NTSC by shooting film at 30 fps. Film still looks like film.

At 24 fps, left and right pans must take much longer if you are not to cause so much blur that the picture becomes unusable. Cinema handbooks often specify how much time, in seconds or minutes and seconds, to pan across so many degrees of arc with various lenses.

Deciding on 24 fps as a frame rate for sound film was a technical vs monetary compromise as well as a technical merging of two frame rates into a "flicker fusion" rate. Shoot film at a frame rate slower than 24 fps and the sound goes yucky while the picture gets a bit jumpy. Shoot faster than 24 fps and the film costs skyrocket.

Beyond that, we need to see roughly 50 frames per second, plus or minus a few frames, in order for our brains to have a sampling rate from our eyes that lets us experience these successive still frames as motion. Once the film is shot and developed it needs to be projected, and on the projection end we need that 50-or-so flashes per second. Silent film, shot at 16 frames per second, achieved the near-50 number by projecting each frame three times, for a total of 48 picture flashes per second.

You can get the same 48 projected frames per second if you shoot film at 24 frames per second and show each frame twice (instead of three times).

So why didn't television, which came along later than film, just borrow the 24 fps frame rate from the start? Because the early television engineers decided to use the cycles per second of AC power lines as a clock to trigger television frames. In Europe they have 220 volts at 50 hertz (cycles per second) and in the USA we have 120 volts at 60 hertz. Each frame has two "fields" of interleaved scan lines (interlacing), so we wind up with either 50 fields per second or 60 fields per second. The division of each frame into two fields was also a result of how fast phosphors faded after being activated to fluoresce.

Because of the speed limits on electronics, the fade rate of phosphors and the need for about 50 or more flashes per second on screen, early television needed to interleave two scans, each from top to bottom, rather than progressively scan (once) every line from the top to the bottom of the screen.

Each of the two scans per frame is called a field. The first field takes every other scan line location from top to bottom. The second field in the frame covers top to bottom all the scan line locations skipped in the first field. Together they make a full, interleaved, frame.
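The field interleave described above can be sketched in a few lines of Python; the lists stand in for rows of pixels and the names are illustrative only:

```python
# Weaving two fields into one interlaced frame.
def weave(field1, field2):
    """Interleave two fields: field1 supplies even line positions (0, 2, ...),
    field2 the odd positions (1, 3, ...)."""
    frame = [None] * (len(field1) + len(field2))
    frame[0::2] = field1   # first field: every other line, starting at the top
    frame[1::2] = field2   # second field: the skipped lines
    return frame

# A tiny six-line "frame": field 1 scanned first, field 2 scanned ~1/60 s later (NTSC)
print(weave(["f1-line0", "f1-line2", "f1-line4"],
            ["f2-line1", "f2-line3", "f2-line5"]))
```

Any subject movement between the two scan times lands on alternating lines, which is exactly the comb effect described below.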

Generally the difference in time between the first field and the second field in a frame is not noticed. Horizontal movement will show up most in the comb-like appearance of vertical lines. It becomes most apparent in a still capture (as in the photo at left, a capture from video, cropped).

The first field scan picks up half the image. As this occurs and the dancers (in the picture to the left) move left or right, the second field scan picks up the same image with a small shift in position from the first field.

Projection Frame Rates

Film projectors show each frame either two or three times before moving on to the next frame in line. So film shot at 24 fps is shown at a projection rate of 48 flashes of light per second or 72 flashes on the screen per second. Film projectors have a rotating shutter with either two or three blades.

The projected image from each frame is shown two times with a two bladed shutter or three times with a three bladed shutter. After the last display the film is pulled down to the next frame and the next image in line is then shown two or three times. The pull down occurs 24 times per second.

Early motion experiments showed that two conditions need to be present for us to see apparently continuous motion:
1) a minimum of about 10 frames (separate images) per second and
2) around 50 separate flashes of image to smooth the motion.

During much of the silent film era projectors showed film at 16 frames per second with a three bladed shutter for about 48 flashes per second. When sound film arrived film speed had increased to 24 fps and two-bladed shutters in projectors could show each frame twice, producing 48 flashes per second. Our eyes "sample" those flashes at about 50 or more times a second.
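The projection arithmetic above reduces to flashes per second = frame rate times shutter blades. A quick check in Python:

```python
# flashes per second = frames per second x shutter blades
for fps, blades in [(16, 3),   # silent era: 16 fps, three-bladed shutter
                    (24, 2),   # sound film: 24 fps, two-bladed shutter
                    (24, 3)]:  # sound film with a three-bladed shutter
    print(f"{fps} fps x {blades} blades = {fps * blades} flashes per second")
```

All three land at or above the roughly 48-flash threshold our eyes need.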

 

Depth of Field, "Selective Focus" and Sensor Size


The point of focus is the specific distance at which the camera is focused. Roughly 1/3 of the DOF falls in front of that point and 2/3 behind it. (The hyperfocal distance, by contrast, is the focus distance at which the far limit of the depth of field reaches infinity.)

35mm film vs 1/3-inch chips

Depth of field is the area in front of the camera in which picture detail is acceptably sharp. Selective focus is taking advantage of depth of field to control what you want in focus and what you do not want in focus. Although out-of-focus areas are blurry this is not the same as the other most common blur, motion blur.

The ability to control what is in focus and what is out of focus is one of those pictorial controls enjoyed most by formats where the sensor (or film) size is larger. Most video cameras have very small sensors and as a result video tends to have a larger area in focus, a greater "depth of field," just because of the small sensor. That makes it hard to limit focus to the slim zone so many shooters desire. The slim zone is fine for certain uses, for short sequences with carefully controlled subject motion. Also, Hollywood crews include someone called a "focus puller": a member of the camera crew, not the cinematographer, who is responsible for one thing only, maintaining exact focus on the actors. They also shoot one take after another until they get the thing right.

Well, most other shooters don't have such luxuries of focus pullers and take after take to get it right. If anything, most shooters of concerts are at the tail end of the food chain and have to put up with lighting that is crappy for cameras and only one chance to get everything looking right. The last thing I can use is a lot of out-of-focus dance, which is what I would have with a small depth of field. I would have to scrap whole concerts if I insisted on such a small depth of field.

But if you have the budget, and the production is being made for the camera and you get to do the multiple takes where you put together the best footage, go for it. It will look nice. Just don't expect to get that freedom in most circumstances.

There are several factors which control depth of field:

Factors Controlling Depth of Field

  More depth of field                             Less depth of field
  ----------------------------------------------  ----------------------------------------------
  Smaller aperture (f-stop number gets larger)    Larger aperture (f-stop number gets smaller)
  Camera-to-subject distance increases            Camera-to-subject distance decreases
  Sensor (film) size gets smaller                 Sensor (film) size gets larger
  Lens angle of acceptance increases              Lens angle of acceptance narrows
  (zoom out toward the wide angle setting)        (zoom in toward the telephoto setting)

Movie film 35 mm format is not like the format in a regular 35 mm camera. The size of a still photo frame in a 35 mm camera is 24 mm x 36 mm. This is called full-frame. A half-frame camera's photo frame size is 18 mm x 24 mm and it is oriented at 90-degrees to the full frame cameras.

A 35 mm movie film camera shoots a frame size which is the same as half-frame. The closest size equivalent in today's DSLR cameras is the APS-C (23.6 mm x 15.7 mm) size, which is very popular now. This means that an APS-C sensor has nearly the same depth of field as a 35mm movie (film) camera for equivalent lenses, f-stops and distances.
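One way to compare these formats is the diagonal "crop factor" relative to a full-frame still frame (36 x 24 mm). The helper below is my own sketch; the dimensions come from the text:

```python
import math

def crop_factor(width_mm, height_mm, ref=(36.0, 24.0)):
    """Diagonal crop factor relative to a full-frame 35 mm still frame (36 x 24 mm)."""
    return math.hypot(*ref) / math.hypot(width_mm, height_mm)

# Formats mentioned above (dimensions in mm)
for name, dims in {"35mm movie / half-frame (24 x 18)": (24.0, 18.0),
                   "APS-C (23.6 x 15.7)": (23.6, 15.7)}.items():
    print(f"{name}: crop factor {crop_factor(*dims):.2f} vs full-frame still")
```

The two factors come out within a few percent of each other, which is why APS-C is the closest DSLR match to the 35 mm movie frame.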

These original dimensions for the width of the film haven't changed since 1889, although the image size and shape on the film has gone through a number of variations: from a 4:3 ratio (the very first) to ratios of 2:1 or wider, or a 4:3 ratio shot with anamorphic lenses and expanded to 2:1 or wider.

By 1887 the Edison company was working on the "Motion Picture" project. About this time Eastman Kodak was developing a flexible film (rather than glass plates). W.K.L. Dickson, working for Edison, first got hold of 50-foot rolls of the new flexible film from Eastman in the spring of 1889. It was smooth film, and the Edison researchers cut sprocket holes in it. They first made the picture width across the film 1.2 inches, then 3/4, and finally 1 inch wide by 3/4 inch high (the 4:3 ratio of most film before the mid-50s and of most television until recently).

The actual width of the film was 1-3/8 inches. That is 34.925 millimeters, or, as we nominally call it, 35 mm. Even though we call the film 35 mm, it is exactly 1-3/8 inches. Has been since 1889 and still is.

In the 1950s, to compete with television, which Hollywood filmmakers saw as a threat and which had also inherited the 4:3 film ratio, Hollywood filmmakers (and others) introduced widescreen formats. They did it three ways:
1) still using the 4:3 ratio in the film gate, they used an "anamorphic" lens to squeeze a wider picture into the film area, then expanded the picture on projection with an anamorphic projection lens
2) they changed the ratio on the film by keeping the image width at 1 inch but reducing the height from 3/4 inch to various smaller heights
3) they also rotated the image 90 degrees on the film so that it ran horizontally

A fourth method used wider film, 70-mm film.

So, the widescreen formats have a history in film and in competitive fears over television. And now, with HiDef televisions, we have a modified widescreen format of 16:9. This is not quite 2-to-one but close.
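The ratios in play here are easy to check as decimals. The anamorphic line assumes a 2x squeeze, a common but not universal figure:

```python
# Width-to-height ratios as decimals
print(f"4:3  = {4 / 3:.2f}:1")    # classic film and television
print(f"16:9 = {16 / 9:.2f}:1")   # HiDef television
# A 2x anamorphic squeeze into a 4:3 gate unsqueezes to twice the ratio
# (real scope prints are a bit narrower because the soundtrack trims the gate)
print(f"4:3 with 2x anamorphic squeeze = {4 / 3 * 2:.2f}:1")
```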

All the shots below were taken with my Leica M2 and either my 35mm f/2 Summicron (wide angle) lens or my 50 mm Elmarit, using Ilford FP4 film.

In the first shot the woman in front is sharply in focus. You can see facial detail and detail in her macramé sleeve. Everything behind her is out of focus. The depth of field is all very close to the camera. (Auburn, New York, 1973)

In the second shot the man in front is blurred while the marchers behind are in sharp focus. (March by Bangladeshis to free Mujib, Fall 1971, London, England)

In the third shot the man in the middle is in focus and everyone in front and in back is either out of focus or very soft. The depth of field runs from just in front of the man's cigarette to a little past his back shoulder. The point of focus was probably on his eyes. (March by Bangladeshis to free Mujib, Fall 1971, London, England)

In the fourth shot everyone is in focus although the background is going soft. The combination of focusing on the persons in front and their distance from the camera creates a larger depth of field. (March by Bangladeshis to free Mujib, Fall 1971, London, England)
   
35mm Depth-of-Field Adapters

There are several "depth-of-field" adaptors which attach to the front of a video camera to emulate the effect of a larger image-size format. They all use lenses for 35mm still cameras focusing an image on some sort of ground glass and then making a video of that image. Most are expensive, as in several thousand dollars. Mostly, their purpose is to reduce depth of field. Most degrade the actual image and all lose light.

The unit shown here is one of the very good ones with an aerial image rather than a ground-glass image. Even so, it will take about $2,700 to buy a full outfit with the DOF adapter, the rails and rail-attachment stand as well as more for the still-camera lens. At a price like that you could buy another nice "prosumer" camera or a couple of DSLRs with video capability.

This type of device fills a temporary need until the manufacturers themselves make the cameras to fill this market.

These things will stop selling as more video cameras with larger sensors and DSLRs with their already larger sensors take over this makeshift product area.

DSLRs with Video recording

  SLR = Single-Lens Reflex (film)
DSLR = Digital Single-Lens Reflex

Those adapter costs can be avoided by going directly to any of the new DSLRs (still cameras) which include video recording. This way, for far less than the 35mm adapters, you can get an image sensor with built-in 35mm depth-of-field characteristics. A DX-size sensor is the right size to show the same focus look as a 35mm movie film camera. A DSLR isn't quite as handy as a video camera, but including video in DSLRs will be standard from now on.

The main disadvantages so far are that options such as focus and exposure control are limited, especially now. Future designs may change that. One of the most problematic limits for event shooters is recording time. Five minutes and 20 minutes are the usual limits, and most of the time that means the camera stops recording after that many minutes. To continue you need to restart the video recording function. If you are shooting concerts where individual sections run longer than this, that is clearly not workable. No matter how fast your hands are, you will have gaps in the coverage as you rush to restart the camera.

Video cameras with single large sensors.


A mere $20,000 will get you a Sony PMW-F3K video camera with a Super-35mm sensor.

Color Depth

The term color depth refers to how many divisions of tonal change are recorded for each color and how much of that is retained when the image is recorded to tape or memory card. The more steps recorded, the smoother and deeper the colors. Below is a set of grayscale bars illustrating color depth (even though it is grayscale, it is still called color depth).
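The number of tonal steps doubles with every bit added, which is why the bars smooth out so quickly. A one-liner's worth of Python makes the point:

```python
# Tonal steps per channel: levels = 2 ** bits
for bits in (1, 2, 4, 8, 10, 12):
    print(f"{bits}-bit: {2 ** bits} levels per channel")
```

An 8-bit channel already gives 256 steps; 10- and 12-bit recording push that to 1024 and 4096.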

In a digital file a color doesn't really "look" like a color, the way film would retain it, for example. Each digital color is simply labeled (tagged) with the name of a color and with how much of that color. When this is displayed, the correct color pixels are illuminated to the amount of color for that pixel.

Most digital photo and video files you will encounter lose data when stored, being compressed to a smaller file size. This kind of compression is called "lossy" because you lose data. The loss usually isn't all that noticeable, and it is required in order to store data which would otherwise arrive too fast, faster than it can be written to disk or memory card. Some data is lost and is reconstructed when the file is opened again, usually close enough to the original that we don't really notice the missing information. Even so, it is a loss of color depth. The only way to avoid the loss is to be able to store more and more data in real time. That means using more bits in the file per color.

The degree to which the file is able to show small changes in the amount of a color is the color depth, literally the number of bits used to describe it. The more bits, the more subtle the changes which are recorded, the larger the file needed to store them and, in the case of video, the faster the memory has to be to accommodate the data stream. Purely consumer cameras often use low data rates, such as 6 megabits of data per second. Prosumer cameras tend to run between 16 and 25 megabits per second. Commercial cameras can easily run above 75 megabits per second. The more data being stored, the more detail from the scene is retained, up to uncompressed (lossless) forms.
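Those data rates translate into storage roughly like this (megabits divided by 8 gives megabytes per second; times 60 gives megabytes per minute). The category labels are from the text; the function is my own sketch:

```python
# Storage needed per minute of footage at a given data rate
def mb_per_minute(megabits_per_second):
    return megabits_per_second / 8 * 60  # Mbit/s -> MB/s -> MB per minute

for label, rate in [("consumer", 6), ("prosumer", 25), ("commercial", 75)]:
    print(f"{label} ({rate} Mbit/s): {mb_per_minute(rate):.1f} MB per minute")
```

At 75 Mbit/s a single minute of footage runs over half a gigabyte, which is why the archive costs listed earlier climb so fast.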