Tuesday 22 January 2019

A few pointers about undertaking first steps towards 'meaningful' research.

Wayne C. Booth, Gregory G. Colomb, and Joseph M. Williams explain how to find and evaluate sources, anticipate and respond to reader reservations, and integrate these pieces into an argument that stands up to reader critique.
Published by the University of Chicago Press
The Craft of Research is intended to serve any person aspiring to be an effective and sound researcher. The authors have meticulously structured their work to provide fundamental insights on how to approach any research project - be it one's first research-based assignment at the undergraduate level or a business or government research report.
The language is simple and the logic/rationale of the steps lucid. The approach laid out by the authors seeks to help identify the significance of one's research question. They suggest a simple three-step formula:
  1. Topic: I am studying _________
  2. Question: because I want to find out what / why / how ________,
  3. Significance: in order to help my reader understand _________.
As the researcher/reader progresses from step 1 to step 2, she/he graduates from just collecting data to being a researcher interested in 'understanding something better.' As one moves from step 2 to step 3, the focus shifts to 'why that understanding is significant' (3rd ed., p. 51).

Thursday 4 August 2016

Creativity and the Rule of Thirds

This article is based on "Creativity and the Rule of Thirds" by Jim Altengarten. Jim Altengarten is the owner of exposure36 Photography, which specializes in landscape photography, creative vision, and photographic education. Jim teaches classes every quarter at the Experimental College of the University of Washington. Topics include Basic and Intermediate Photography, Composition, Exposure, Macro Equipment, and the Canon EOS Camera System. He also teaches workshops at prime locations in the western United States--such as Death Valley, Yosemite, The Grand Tetons, and The Palouse wheat fields. Please check the exposure36 Photography website for information about classes and workshops <http://www.exposure36.com/>. He welcomes e-mails at info@exposure36.com.

I firmly believe that for any form of visual communication to be effective, the communicator should be able to produce images that have both strength and clarity. The viewer is most likely to be bored by an image if either is lacking. But before any communicator can think of conjuring up visuals, he/she must have something to say, and before anything can be said, one has to be aware of the world around us. All good communicators have a view of the world; this perceptiveness and perception are extremely important both for the communicator and for the viewer, who negotiates the meaning of the text based on his or her cultural bearings.

In this regard I would like to refer to Aldous Huxley’s Visual Process. To process any visual (whether at the end of the communicator or at the end of the viewer/consumer) mentally, on a higher level of cognition than simply sensing and selecting, means that one must concentrate on the subjects within a field of view with the intent of finding meaning, not simply as an act of observation. Although one may be able to identify a particular visual element with mental processing when it is a unique, new, or surprising occurrence, analyzing a visual message ensures that one will find meaning in the picture. If the image comes to hold a meaning for the viewer (remember that even the communicator, with whom the idea originates, is responding to visual stimuli), it is likely to become a part of one’s long-term memory. In the words of the American philosopher Henry David Thoreau, “the question is not what you look at, but what you see.” The more you know, the more you sense. The more you sense, the more you select. The more you select, the more you perceive. The more you perceive, the more you remember. The more you remember, the more you learn. The more you learn, the more you know.

This article examines the use of the Rule of Thirds to improve the strength and clarity mentioned above, as well as some additional ways to utilize the concept to allow more creativity in your images. Remember, composition exists in a context. That context is the frame, which is itself an element of picture composition. The idea here is to help you identify your subject, emphasize the subject, and lead the viewer’s eye to pictorial or visual elements of your choice in an aesthetically pleasing way.

Design Principles: The Principles are concepts used to organize or arrange the structural elements of design. Again, the way in which these principles are applied affects the expressive content, or the message of the work. Further, the use of design principles applied to the visual elements is like visual grammar. When children learn art, it is like learning to read and write the language of vision. When they develop a style of expressing visual ideas, it helps them become visual poets. Looking for the visual effects of design principles does not have to limit an artist's options. It can focus an artist's experimentation and choice making.

Strength refers to the ability of a visual to attract the viewer's attention. Here it is pertinent to always remember that the average person viewing images has an attention span approximating that of a three-year-old child. If one is not able to grab attention immediately, the image will fade into oblivion. This strength may come from ‘Emphasis’ – or "Centre of Interest." It is about dominance and influence. Most communicators put it a bit off centre and balance it with some minor themes to maintain our interest. Some artists avoid emphasis on purpose. They want all parts of the work to be equally interesting. Harmony is another element that provides strength - pleasing visual combinations are harmonious. Another way to achieve strength is through ‘Opposition’ - contrasting visual concepts. Rajasthan’s desert "big sky" landscape becomes very dramatic and expressive as monsoon clouds build. Thus, in short, more often than not a viewer will abandon the image before examining its various parts and subtleties if the image lacks strength. Both strength and clarity must be present.

Clarity refers to the ability of the image to maintain the viewer's interest. This stems from allowing the viewer to explore the parts and subtleties of the image. One must provide a mechanism for the viewer's eye to use to examine all parts of the scene and return to the main focus. This clarity is possible if Unity exists – that is, nothing distracts from the whole. Unity without variation is often uninteresting - like driving on a clear day through the Sahara Desert. Just remember, ‘Unity’ with diversity generally has more to offer, both in art and in life. Of course, some very minimal art can be very calming and at times even very evocative. Also, a simple landscape may have a powerful mesmerising effect. In composition, there are several principles and elements available to enhance strength and clarity. While composing the image, the photographer has to be aware of all pictorial elements, and then choose the ones that appeal to him/her most in just the right proportion to create a visual motif.

Finally, please do understand that it is ‘Simplicity’ which is the key to most good pictures and visuals of any form. The simpler and more direct a picture, the clearer and stronger is the resulting statement. There are several things to be considered when we discuss simplicity. First, select a subject that lends itself to a simple arrangement; for example, instead of photographing an entire area that would confuse the viewer, frame in on some important element within the area. Second, select different viewpoints or camera angles. Move around the scene or object being photographed. View the scene through the camera viewfinder. Look at the foreground and background. Try high and low angles as well as normal eye-level viewpoints. Evaluate each view and angle. Only after considering all possibilities should you take the picture. See beyond and in front of your subject. Be sure there is nothing in the background to distract the viewer's attention from the main point of the picture. Likewise, check to see there is nothing objectionable in the foreground to block the entrance of the human eye into the picture.

A last point of simplicity: tell only one story. Ensure there is only enough material in the picture to convey one single idea. Although it is hard to compose any picture without numerous small parts and contributing elements, these should never attract more of the viewer's attention than the primary object of the picture. The primary object is the reason the picture is being made in the first place; therefore, all other elements should merely support and emphasize the main object. Do not allow the scene to be cluttered with confusing elements and lines that detract from the primary point of the picture. Select a viewpoint that eliminates distractions so the principal subject is readily recognized. When numerous lines or shapes are competing for interest with the subject, it is difficult to recognize the primary object or determine why the picture was made. For example, as in the picture above-left of a coyote, be sure that only the things you want the viewer to see appear in the picture. If there are numerous objects cluttering up the background, your message will be lost. If you can’t find an angle or framing to isolate your subject, consider using depth of field control to keep the background out of focus.

Moving on, a photographer must determine which design principles are important for creating the image. Some of the prominent design elements are:

1. Balance: It may be symmetric or asymmetric, subtle or obvious. Balance, in other words, is the consideration of visual weight and importance. It is a way to compare the right and left side of a composition.


o In symmetric balance both sides are similar in visual weight and almost mirrored. As symmetrical balance often looks more stiff and formal, sometimes it is called formal balance. Of course a butterfly, even though it is symmetrical, doesn't look stiff and formal because we think of fluttering butterflies as metaphors for freedom and spontaneity. It is a case of subject matter and symbolism overpowering formal design effects.





o Asymmetrical balance is more interesting. In the image above, both sides are similar in visual weight but not mirrored. It is more casual, dynamic, and relaxed in feeling, so it is often called informal balance.





o Radial balance is not very common in artists' compositions, but it is like a daisy or sunflower with everything arranged around a centre. Rose windows of cathedrals use this design system. Of course a sunflower can have many meanings and feelings beyond its "radiant" feeling. Farmers might hate it as a weed cutting into their corn production. On the other hand, many of us can't help thinking about Vincent Van Gogh's extraordinarily textured painted sunflowers. Once we have contemplated those thickly expressed colours and textures with their luscious painterly surface, every sunflower we see becomes an aesthetic experience filled with spiritual sensations.








2. Dominant element: Usually there is one main subject in the image. The subject may be either a single object or a relationship. The principle of dominance makes an aspect of the design the focal point or emphasis. In The Magdalen with the Smoking Flame by Georges de La Tour (see image on the left), the dominant element is Mary Magdalen’s gaze and meditation, owing to the light radiating from the candle, which leaves the rest of the scene in darkness, and to the nature of the subject, since in a portrait the sitter is usually the dominant element.





3. Eye flow: Elements in the scene that guide the viewer's eye through the entire frame. There are two basic concepts that photographers use when composing their photographs: the first is the Rule of Thirds and the second is “eye flow,” which is more difficult to understand because there is no basic starting point. Designers and photographers plan every element in fashion photography to make sure you see what they want you to see. The first question to always ask is: “What am I taking a picture of?” The second question is: “How will the observer view the image?” Contrast contributes to eye flow. Contrast is the ratio between the highlight and shadow areas of an image (see image on the right-above). This is another multiple-subject image. Male or female, your eye is drawn to the model's legs and tends to travel upwards to the bright white and horizontal piano keys. The contrast between the piano keys and the deep shadows then forces your eye downward to see the surprise subject hiding under the piano. When taking any photograph, even a candid, always ask yourself the two key questions and you will consider eye flow in creating your composition. When you are planning your composition, you can use strong horizontal or vertical lines to make the eye flow to the subject. Using strong colours is another way to guide the viewer through the image. Our eye travels first to lighter and brighter colours, and no matter how hard we try to see the burgundy colour first, our eye is drawn to the yellow. Designers use colour and lighting to make sure the item they are selling stands out. Nothing aggravates a designer more than when we appreciate the model more than the lipstick!



4. Simplicity: Only what is essential to the scene is included in the final image.








The other important aspect is the application of design elements to create clarity in familiar applications such as:

o Lines: Lines can be effective elements of composition, because they give structure to your photographs. Lines can unify composition by directing the viewer's eyes and attention to the main point of the picture or lead the eyes from one part of the picture to another. They can lead the eyes to infinity, divide the picture, and create patterns. Through linear perspective, lines can lend a sense of depth to a photograph. (Linear perspective causes receding parallel lines to appear to converge in the picture. This allows you to create an illusion of depth in your pictures.) The viewer's eyes tend to follow lines into the picture (or out of the picture) regardless of whether they are simple linear elements such as fences, roads, and a row of phone poles, or more complex line elements, such as curves, shapes, tones, and colours. Lines that lead the eye or direct attention are referred to as leading lines. A good leading line is one that starts near the bottom corner of the scene and continues unbroken until it reaches the point of interest (see picture on the right). It should end at this point; otherwise, attention is carried beyond the primary subject of the photograph. The apparent direction of lines can often be changed by simply changing viewpoint or camera angle.

o Shapes: Shape is a two-dimensional element basic to picture composition and is usually the first means by which a viewer identifies an object within the picture. Form is the three-dimensional equivalent of shape. Even though shape is only two-dimensional, with the proper application of lighting and tonal range, you can bring out form and give your subjects a three-dimensional quality. Lighting can also subdue or even destroy form by causing dark shadows that may cause several shapes to merge into one. Shapes can be made more dominant by placing them against plain contrasting backgrounds; for example, consider again the white sail against the dark water background. The greatest emphasis of shape is achieved when the shape is silhouetted (see picture on the right), thus eliminating other qualities of the shape, such as texture and roundness, or the illusion of the third dimension.

o Patterns: Creating your pictures around repeating elements or patterns provides picture unity and structure. Pattern repetition creates rhythm that the eyes enjoy following. When lines, shapes, and colours within a picture occur in an orderly way (as in wallpaper), they create patterns that often enhance the attractiveness of photographs. Pattern, like texture, is found almost everywhere. It can be used as the primary subject but is most often used as a subordinate element to enhance composition. When pattern is used as a supporting element, it must be used carefully so it does not confuse or overwhelm the viewer. Pictures that are purely pattern are seldom used, because they tend to be monotonous. Patterns should be used to strengthen and add interest to your subject.
o Textures: Texture helps to emphasize the features and details in a photograph. By capturing the "texture" of objects being photographed, you can create form. When people observe a soft, furry object or a smooth, shining surface, they have a strong urge to touch it. You can provide much of the pleasure people get from the feel of touching such objects by rendering texture in your pictures. Texture can be used to give realism and character to a picture and may in itself be the subject of a photograph. When texture is used as a subordinate element within the picture, it lends strength to the main idea in the photograph. It usually takes just a little different lighting or a slight change in camera position to improve the rendering of texture in a picture. When an area in a photograph shows rich texture, the textured area usually creates a form or shape; therefore, it should be considered in planning the photograph (see image on the right).
o Colour (Tone): Tone is probably the most intangible element of composition. Tone may consist of shadings from white-to-gray-to-black, or it may consist of darks against lights with little or no greys. The use of dark areas against light areas is a common method of adding the feeling of a third dimension to a two-dimensional black-and-white picture. The interaction of light against dark shades in varying degrees helps to set the mood of a composition. A picture consisting of dark or sombre shades conveys mystery, intrigue, or sadness. When the tones are mostly light and airy, the picture portrays lightness, joy, or airiness.




Finally, there are photographic elements that add strength to the image. These elements include such aspects as:
o Format (portrait or landscape)
o Placement of the main elements
o Lens Selection
o Focusing
o Perspective: The human eye judges distance by the way elements within a scene diminish in size, and the angle at which lines and planes converge. This is called linear perspective. The distance between camera and subject and the lens focal length are critical factors affecting linear
perspective. This perspective changes as the camera position or viewpoint changes. From a given position, changing only the lens focal length, and not the camera position, does not change the actual viewpoint, but may change the apparent viewpoint. The use of different focal-length lenses in combination with different lens-to-subject distances helps you alter linear perspective in your pictures. When the focal length of the lens is changed but the lens-to-subject distance remains unchanged, there is a change in the image size of the objects, but no change in perspective. On the other hand, when the lens-to-subject distance and lens focal length are both changed, the relationship between objects is altered and perspective is changed. By using the right combination of camera-to-subject distance and lens focal length, a photographer can create a picture that looks deep or shallow. This feeling of depth or shallowness is only an illusion, but it is an important compositional factor. Using a short-focal-length lens from a close camera-to-subject distance, or viewpoint, produces a picture with greater depth (not to be confused with depth of field) than would be produced with a standard lens. Conversely, using a long-focal-length lens from a more distant viewpoint produces a picture with less apparent depth.
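As a rough way to see why focal length alone does not change perspective, here is a minimal sketch using a simple pinhole-camera approximation; the subject sizes, distances, and the image_size helper are illustrative assumptions, not figures from the article:

```python
# Pinhole approximation: image size is roughly object_size * focal_length / distance.

def image_size(object_size_m, distance_m, focal_length_mm):
    """Approximate image size (in mm on the sensor) of an object at a given distance."""
    return object_size_m * focal_length_mm / distance_m

subject = 1.8                       # two 1.8 m tall subjects
near, far = 2.0, 10.0               # one 2 m away, one 10 m away

for f in (24, 50, 100):             # change only the focal length (same viewpoint)
    n, fr = image_size(subject, near, f), image_size(subject, far, f)
    print(f"{f:3d} mm: near {n:5.1f} mm, far {fr:5.1f} mm, near/far = {n / fr:.2f}")
# The near/far ratio stays 5.00 at every focal length: image sizes change,
# but the relationship between objects (perspective) does not.

for d in (2.0, 4.0, 8.0):           # now change the camera-to-subject distance
    n, fr = image_size(subject, d, 50), image_size(subject, d + 8.0, 50)
    print(f"near at {d:.0f} m: near/far = {n / fr:.2f}")
# Here the ratio changes (5.00, 3.00, 2.00) - the perspective itself changes.
```

The point of the sketch is only to show the proportionality; it ignores lens design, sensor size, and depth of field.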
One method of creating strength in an image is to create focal points that draw the viewer's eye to that area. Focal points compel the viewer to look at them first. There are several techniques that create strong focal points. First, the photographer can isolate the subject. Throwing everything in the scene out of focus except for the main subject is one example of this technique. The viewer's eye is attracted to whatever is sharp in the image. The viewer's eye generally will not remain very long in an area that is out of focus. However, when everything is in sharp focus, the image becomes cluttered and won't hold the viewer's attention. Having too many things to look at causes fatigue in the viewer's eye!  
Having a contrast in tone or colour between parts of the image is another method that creates a strong focal point. When you're dividing the image space by tone or colour, it's important to examine how the division occurs. If the image is equally divided between two tones, the viewer becomes confused, because each portion of the image has equal weight. For example, consider the classic sunset image. If the horizon line is placed in the centre of the frame, both the sky and water take up an equal amount of space. The viewer feels uneasy, because the photographer didn't provide any visual clues as to what is most important in the scene. This type of image lacks strength, and the viewer will quickly abandon it. One curative option is to lower the horizon, which places emphasis on the clouds in the sky. Raising the horizon places emphasis on the reflections in the water. Which is best? The photographer must decide whether the sky or water is more attractive. If the photographer can't decide and splits the frame equally, his/her indecisiveness will be apparent to the viewer.

Placement of elements in the frame can also create focal points. Key placement questions to consider include what, how, and where to place elements in the scene. You should articulate what attracts you in the scene. That will dictate what to place in the final image. If the photographer can't articulate what causes his/her personal passion in a scene, passion won't come across to the viewer. How you place something in the image refers to whether the element is fully or partially visible. Showing the entire element increases the attentive value of that element. Partially showing the element decreases the emphasis on that element. When you want to stress the relationship between two elements in the scene, rather than the elements individually, place them partially out of the image or near the edges of the frame. Where to place the main elements in the image is the final consideration for attracting the viewer's attention. The Rule of Thirds is the most common method for determining where to place the main elements. It's based on the concept that the strength of an image improves when the main elements are placed at key locations away from the centre of the frame.

We've been programmed to locate main elements in the centre of the frame. Do you remember when you were a child, and the teacher told you to draw a red flower with your crayon? Where did you place it? You probably began in the centre of the page. Why? There was lots of room there, so you could draw the entire flower. Your first camera was probably of the point-and-shoot variety. The only area that confirmed the subject was in focus was the focus point in the centre of the camera lens. If you can determine focus in the centre of your field of view, isn't it logical to place your subject there? The problem, of course, is that placing the subject in the centre of the frame normally provides little interest for the viewer. The brain is logical. If the brain subconsciously expects to find something in the centre of a picture, and it's located there, no excitement is generated. Placing the subject away from the centre provides visual stimulation.
Rule of Thirds 
Before talking about when it's permissible to break the Rule of Thirds, let's make sure that we understand how it works. Several schools of thought in ancient Greece searched for mathematical formulas for the perfect number, chord, etc. They also searched for perfect balance in their artwork. Renaissance architects and painters continued the search for perfection. They decided that the relationship of five to eight created such balance. Divide the length of the canvas (or picture frame) into eight parts, and at the fifth mark from the left, draw a line from top to bottom. Count five parts, starting from the opposite side, and do the same thing. Draw two lines in the same manner from the width of the frame, and the end result is figure 1 on the left. This is called the Golden Mean because it represents the perfect division of space. The points where the lines intersect are called power points. Placing your main subject at one of the power points gives it a high attentive value and adds strength to your image. If there's more than one main subject, placing each at a power point provides balance and strength.
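To make the five-to-eight division concrete, here is a minimal sketch in Python; the frame dimensions and the helper function are purely illustrative assumptions, not part of the original article:

```python
# Sketch of the five-to-eight (Golden Mean) division described above.
# The 1600 x 1000 frame is just an example size in pixels.

def golden_mean_lines(length):
    """Divide a side into eight parts; the lines fall at the fifth mark from each end."""
    return (3 * length / 8, 5 * length / 8)

width, height = 1600, 1000
vertical_lines = golden_mean_lines(width)     # x = 600 and 1000
horizontal_lines = golden_mean_lines(height)  # y = 375 and 625

# The four "power points" are the intersections of these lines.
power_points = [(x, y) for x in vertical_lines for y in horizontal_lines]
print(power_points)  # [(600.0, 375.0), (600.0, 625.0), (1000.0, 375.0), (1000.0, 625.0)]
```

Placing the main subject at any one of these four intersections, rather than at the exact centre (800, 500), is what the power-point advice above amounts to in practice.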





It's difficult to visually divide the viewfinder into eight equal parts. Therefore, it's easier to use the Rule of Thirds, which divides the viewfinder into three sections, both horizontally and vertically. As you can see from Figure 2 (see image on right), the Golden Mean is a tighter grouping than the Rule of Thirds. Both methods use the power point concept for placing the main subject(s).
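Continuing the hypothetical 1600 x 1000 frame from the sketch above, the "tighter grouping" of the Golden Mean can be checked with a few lines; the numbers are illustrative only:

```python
# Compare Rule of Thirds lines with Golden Mean (5:8) lines along the width.
width = 1600
thirds_lines = (width / 3, 2 * width / 3)        # ~533 and ~1067
golden_lines = (3 * width / 8, 5 * width / 8)    # 600 and 1000

centre = width / 2
print([abs(x - centre) for x in golden_lines])   # [200.0, 200.0]
print([abs(x - centre) for x in thirds_lines])   # [~266.7, ~266.7]
```

The Golden Mean lines sit closer to the centre than the Rule of Thirds lines, which is why the grouping looks tighter, while both schemes place the main subject(s) at off-centre power points.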



The image below (the rose surrounded by baby's breath) demonstrates locating the subject according to the Rule of Thirds. The placement, as well as the colour contrast, almost requires the viewer's eye to go to the rose first. After stopping at the rose, the eye is free to wander about the rest of the image to explore its content. Therefore, the image has both strength and clarity.


Consider the Rule of Thirds to be the Guidelines of Thirds. If the main subject is always placed at one of four points in the frame, creativity suffers. There are many situations where using the Rule of Thirds will enhance the image. Other situations require more creativity, and that means bending or breaking this rule.

The Rule of Thirds discourages placing an important element in the centre of the frame. However, there are two situations when a centrally placed element works effectively. The first situation arises when there's nothing else in the scene that competes with the main subject. If a flower is in sharp focus and everything else is out of focus, the viewer's eye will go to the flower--no matter where it's placed in the scene. Placing the flower in the centre of the frame works, in this instance, because the flower is a complete subject on its own, and there are no other elements to compete with the flower.

The other situation in which a centrally placed element works occurs when there's a strong sense of balance in the scene. Imagine the hub of a wooden wagon wheel. The hub can be placed in the centre, because the radiating spokes suggest a strong balance within the scene. Placing a strong horizontal line in the centre of the frame works only when one half of the scene is reflected in the other half. Notice that the image below has a strong horizontal line (tree line) in the centre of the frame. The image works due to the strong sense of balance in the scene. In this case, placing the horizontal line anywhere else in the frame would degrade the image dramatically.

As stated previously, placing the horizon in the centre of the frame can confuse the viewer as to what's important. The underlying structure of the Rule of Thirds allows us to modify the location of the horizon to send a clear message to the viewer. The Rule of Thirds can be used to visually weight an image. Visual weight differs from physical weight. Light colours have less visual weight than dark colours when they fill approximately the same amount of space in the frame. Thus, a large mound of dark feathers appears heavier than a white rabbit of equal size. Also, an element that takes up more physical space in the frame has more visual weight than an element that uses less space.

We can bottom weight an image by placing the top of our visually weighted element along the lower horizontal line of our Rule of Thirds grid. Locating the top of the element below the lower horizontal line gives it less emphasis. It's up to the photographer to determine how much emphasis should be placed on each element in the scene. The image below is an example of a bottom-weighted image.



Placing the visual weight at the bottom of the image puts emphasis on the upper portions of the image. In the image to the left, it's really the interesting clouds that make the image. The mountains simply provide a sense of place. If the mountains were seen higher in the image, they would detract from the clouds. The image would change and not be as interesting.



We can also top-weight an image by placing the visually weighted element along the upper horizontal line in the Rule of Thirds grid. The two images below are both top-weighted. You probably get a different feel from each of them--even though they're both images of the Grand Tetons taken from the same tripod holes. The difference is that the image on the right has a stronger base. When you build a house, it needs to have a strong foundation to stand.




The same is true with an image. The Grand Tetons have a lot of visual weight. The viewer can easily determine that they are heavy. In contrast, the grassland in the foreground of each image doesn't represent weight. Top-weighting an image without a strong base makes the weighted object appear to be floating on a surface that won't support it. Therefore, the viewer senses something doesn't appear right in the image, even if s/he can't verbalize the problem. The above images are extreme examples. The image on the above left has a weak base because the bottom of the mountains is too high. The image on the right represents moving the bottom of the mountain to an extremely low base. It sends a better message about the solid feel of the image. Probably the best location for the bottom of the mountain would be somewhere between both images.

A weak base is especially obvious in top-weighted images involving water in the foreground. Unless there's some other foreground object, the viewer can feel uncomfortable with nothing but water supporting the mountain, city buildings, or other objects. Place a finger over the bottom third of the image to the left. When you cover the rocks in the foreground, do you get the feeling that the mountain is floating on the water? We know that mountains can't float, so the viewer may feel some negative tension from the image.

In top-weighted images, the photographer must decide whether the image is supported by the foreground and how much foreground to include. Although it's your decision, be aware of the concept of base and potential viewer reaction to the shot.

Jim Altengarten uses the horizontal grid lines of the Rule of Thirds to create either top or bottom weight in his images. The vertical grid lines can also be used; this is called side-weighting. The image below right is an example of using the left vertical line of the grid to locate the main element of the image. Notice that the small stream of water is placed along the other vertical line in the grid. Placing the main element closer to the edge puts less emphasis on that element. On some occasions, leaving part of the element out of the scene creates an emphasis on the relationship between that element and another element in the scene.

While the main element can be placed on either vertical line, care must be taken to avoid creating negative tension. If there's any action, or implied action, in the scene, the action should normally be located toward the centre of the frame. For example, if the main element of the scene were a bicyclist, the bicycle would move from the edge of the frame toward the centre. If the bicycle were located at either vertical line and appeared to move toward the closer edge of the frame, the viewer might wonder where the bicycle will go once it leaves the frame. This situation is called amputation, because the edge of the frame cuts off the ability of the viewer to follow the anticipated action. Any implied action, such as a person looking out of the frame, can cause the same result.

Counter-cultural placement of the subject is another way of increasing tension in a photo. In western culture, movement is generally left to right. That's how you're reading this page. If the movement in the scene is from right to left (even though it's moving toward the centre), it can create negative tension for western viewers. The next set of three images shows a wolf looking in different directions. You'll probably receive a different feeling from each of the images--depending on the direction of the wolf's stare. Do any of the images give you a feeling of nervousness or curiosity?




Keep in mind the earlier statement that rigidly following rules discourages creativity. There may be occasions when you want to add negative tension to a scene to create a certain mood. Intentionally creating a feeling of amputation can add mystery. Counter-cultural movement inserts a subtle tension that many people feel but can't verbalize. The question boils down to the photographer being able to say what's important in the scene, and then to create circumstances that will allow the viewer to receive the intended message.

Thus far, we've discussed the Rule of Thirds as a basic model and expanded it into a creative approach for placing the main subject in the frame. The preceding suggestions will add strength and generate viewer attention to your images. The Golden Mean and Rule of Thirds provide a sense of order, balance, and beauty to the image. But is this all we want to say in photography? Using only the Rule of Thirds will eventually create monotonous, boring shots where placement is always the same as regulated by the rule. To maintain viewer interest, you need variety, and that comes from creative placement. Let your creativity be your guide!

Sunday 2 December 2012

Student Perception and Credibility of Student’s Opinion

It is difficult to divorce a student’s positive perception of an institution from the desire to secure admission to it – assuming there are no other major variables influencing the outcome of the decision. The perception tends to be formed, in my opinion, through social interaction and active feedback mechanisms involving alumni, peer groups, and the institution’s enrolled students. These views are cross-checked with reference groups and community leaders/elders, including parents and siblings, as well. I believe that the notion of students forming opinions about institutions based on hearsay, rumours, and superfluous or supercilious parameters is not exactly true!

The younger generation, on average, is ‘performance’ conscious and does like to evaluate institutions based on ‘effective educational practices,’ ‘environment & culture,’ ‘industry & job market’s assessment of an institution,’ ‘affordability,’ and faculty performance. I would go to the extent of saying that these perceptions are often based on the outcomes of informal, as well as formal, research undertaken by the potential enroller into ‘higher education.’ There may be indications that ‘disciplinary culture’ and ‘opportunities to engage in personal growth, confidence building, and extra- as well as co-curricular activities’ also tend to influence today’s average student in deciding to join an institution. I am not sure to what extent the parents’/family’s ‘economic reality’ plays a crucial role in the ‘college/university’ decision – especially in today’s world where, even in traditional eastern societies, there is a trend of students beginning to ‘work’ to pay for and/or contribute towards the cost of education.

Yes, indeed, admission is also sought at several institutions because of a certain snob value; however, to retain ‘snob value,’ these institutions have to retain their ‘core values’ (the ones discussed above) and their extrinsic value (which stems from patronage by the ruling elite). You may note that even these institutions vie to enroll accomplished and promising students (even if from lower economic strata) to retain their reputation as premier academic institutions.

I remember reading a study (Umbach & Wawrzynski, 2005), which concludes that students tend to demonstrate higher levels of engagement and learning at institutions where “faculty members use active and collaborative learning techniques, engage students in experiences, emphasize higher-order cognitive activities in the classroom, interact with students, challenge students academically, and value enriching educational experiences.” (p. 153)

Then there is the issue concerning the credibility of students' opinions or perceptions. How does one justify a study that is based on student opinion? Is it more to do with justifying or negating the exercise of student feedback to gauge the quality of an academic institution?

I would like to draw attention to a paper on student perceptions of faculty credibility based on email addresses (Livermore, Scafe, & Wiechowski, 2010). The idea is not to belittle your concern but to accentuate it. The survey conducted indicated that “… a faculty member’s selection of an email address does influence the student’s perception of faculty credibility. An email address that consists of a nickname reduces the student’s perception of faculty credibility. The reduced credibility may have a negative impact on the faculty member as well as the college.” (p. 27)

It is apparent that students’ perceptions play an important role in not just the faculty’s credibility but the institution’s credibility as well. The lack of credibility, as some researchers point out, is linked to perceived learning (Russ, Simonds, & Hunt, 2002; Glascock & Ruggiero, 2006). In today’s highly competitive and cut-throat business environment, most colleges are struggling with increased competition, a decreasing share of the pie, and an uncertain economic scenario – given this, student satisfaction assumes greater importance. A lack of perceived learning is likely to reduce a student’s satisfaction, leading to dwindling enrollment numbers. So it is right to assign greater importance to student perceptions despite the credibility issue.

Given the increased investment in the education industry from the private sector, which adheres to a ‘profit model’ for all of its socio-economic activities, client perceptions will always be important. It may be noted that most successful business enterprises change with the times, strategically re-invent themselves, and re-brand or re-position themselves to meet their customers’ expectations… only, in this case, the student has become the customer.

Bibliography

Glascock, Jack, and Thomas E. Ruggiero. "The Relationship of Ethnicity and Sex to Professor Credibility at a Culturally Diverse University." Communication Education 55.2 (2006): 197-207.
Livermore, Jeffrey A., Marla G. Scafe, and Linda S. Wiechowski. "Student Perceptions of Faculty Credibility Based on Email Addresses." American Journal of Business Education (AJBE) 3.4 (2010): 27-32.
Russ, Travis L., Cheri J. Simonds, and Stephen K. Hunt. "Coming Out in the Classroom... An Occupational Hazard? The Influence of Sexual Orientation on Teacher Credibility and Perceived Student Learning." Communication Education 51.3 (2002): 311-324.
Umbach, Paul D., and Matthew R. Wawrzynski. "Faculty Do Matter: The Role of College Faculty in Student Learning and Engagement." Research in Higher Education 46.2 (2005): 153-184.




Tuesday 13 November 2012

Understanding Television Programing: An Introduction

Television Programing

A television program is a part of programming in television broadcasting. The program may be a one-off broadcast (also called a special) or part of a periodically returning television series. Conventionally, television programming is classified as follows:
  1. Serials: A television series that is intended to air a large number of episodes. When the number is rather small, say around 5 to 7, the term ‘miniseries’ is often used.
  2. Episode: A single instance of a programme is called an episode, "show" or "program".
  3. Television Movie: A television movie is a movie that is initially aired on television. It is not meant to be released directly in theatres or direct-to-video, although many successful television movies are later released on video. It is produced for and originally distributed by a television network. Earlier, television movies were meant to fit a 90-minute time slot (including commercials); this later expanded to two hours, and they were usually broadcast as part of a weekly anthology series (for example, the ABC Movie of the Week).
    Example: The most-watched TV movie of all time was ABC's ‘The Day After’, which aired on November 20, 1983, to an estimated audience of 100 million people. The film depicted America after a nuclear war with the Soviet Union.
A TV miniseries is an extended film, having a small number of episodes and a set plot and timeline. Miniseries usually range from about 3 to 10 hours in total length. In Dubai, MBC 4 used to air miniseries like Doctor Zhivago every Thursday for one hour, comprising about 2-3 episodes. In the UK, the term miniseries is used in reference to imported programs, and such short-run series are usually called "serials".
Advertisements play a role in most television programming, such that each hour of programming contains approximately 15 minutes of commercials. This again is dictated by the popularity of the program, which is measured by ratings. A significant drop in the ratings could mean death to a series. Nielsen Ratings, developed by Nielsen Media Research, help determine the audience size and composition of television programming. These ratings are gathered in two ways. One is by attaching a set meter to TVs in homes; these devices record viewing habits and transmit the information to the broadcasting house or ratings organisation via a home unit connected to a telephone line. The other method is surveys, in which viewers are asked to keep a written record of the programs they watch during the day.
Program Content
The content of television programs may be factual, as in documentaries, news, and reality television, or fictional, as in comedy and drama. It may be topical, as in news and some made-for-television movies, or historical, as in documentaries or fictional series. It may be mostly instructional, as in educational programming; entertaining, as in situation comedy, reality TV, and variety shows; or produced for income, as with advertisements.
While television series appearing on TV networks are usually commissioned by the networks themselves, their producers earn greater revenue when the program is sold into syndication. The Oprah Winfrey Show is syndicated; it is aired on MBC 4, Star Plus, etc.
With the rise of the DVD home video format, box sets containing entire seasons or the complete run of a program have become a significant revenue source as well. For example, after every season of the popular series ‘Friends’, a DVD of the completed season is released.
Programming
Getting TV programming shown to the public can happen in many different ways. After production the program is marketed and delivered to whatever markets are available to it.
This typically happens on two levels:
1. Original Run or First Run - a producer creates a program of one or multiple episodes and shows it on a station or network which has either paid for the production itself or to which a license has been granted by the producers to do the same.
2. Syndication - this goes beyond the original run. In secondary runs, other channels pay a fee for the serial. It includes international usage, which may or may not be managed by the originating producer. In many cases other companies, TV stations or individuals are engaged to do the syndication work, to sell the product into the markets they are allowed to sell into by contract from the copyright holders, the producers. Example - MBC 4 shows CBS ‘Up to the Minute.’
Genres
Scripted entertainment
The scripts are prepared, dialogues are written, etc.
Dramatic television series (including drama, police procedural, serial drama, science fiction, and soap operas - e.g., 24, Medium, Monk), television comedy (That '70s Show), animated television series, miniseries and TV movies, and award shows (the Oscars).
Semi-scripted entertainment
These are partially scripted; they follow a plan that then changes as the show progresses. Talk shows (Dr. Phil, The Tyra Banks Show), game shows (The Bournvita Quiz Contest).
Unscripted entertainment
Reality television (Big Brother), news programs (CBS’s ‘Up to the Minute’), documentaries (wildlife programming on National Geographic), and television news magazines dealing with current affairs are some examples.
The shots in every scene are carefully planned and arranged to bring in a feeling of reality; if a scene looks real, the audience will accept it. The illusion of reality is created through mise-en-scène, both in TV and in cinema.
Mise-en-scène [mizɑ̃sɛn] is an expression used in the theatre and film worlds to describe the design aspects of a production. It has been called film criticism's "grand undefined term," but that is not because of a lack of definitions. Rather, it's because the term has so many different meanings that there is little consensus about its definition.
Stemming from the theatre, the French term mise en scène literally means "putting on stage." When applied to the cinema, mise en scène refers to everything that appears before the camera and its arrangement – sets, props, actors, costumes, and lighting. Mise en scène also includes the positioning and movement of actors on the set, which is called blocking. These are all the areas overseen by the director, and thus, in French film credits, the director's title is literally “mise en scène.”
This narrow definition of mise en scène is not shared by all critics. For some, it refers to all elements of visual style – that is, both elements on the set and aspects of the camera. For others, such as U.S. film critic Andrew Sarris, it takes on mystical meanings related to the emotional tone of a film. Semiotics is a part of mise-en-scène; the latter happens to be an umbrella term.
The theory and study of signs and symbols, especially as elements of language or other systems of communication, and comprising semantics, syntactics, and pragmatics, is called semiotics. It is studied in terms of codes.
· Social codes, such as the appearances of the actors and the set (these refer to the dress and gestures of the characters that signify specific meanings for the show). These codes help signify the characteristics of the characters, allowing an audience to understand them.
· Technical codes include opening credits, camera angles, closing credits, and editing.
· And last but not least, representation codes, which include dialogue, audience laughter, and stereotypical signs. For example, in the sitcom Frasier, the way Frasier’s apartment is designed, the colours used, and the artefacts on the showcase all reflect his rich taste for the good things in life.
Television programs are read by decoding the dominant signs portrayed through each scene. This process may seem unconscious to an audience because the dominant signs portrayed through each scene have become instilled as common sense within society. Mise-en-scène is about cinematic space - what occurs in a defined area of space, bordered by the frame of the screen and determined by what the camera has been made to record.
The elements of Mise-en-scène: examples.
Décor
Décor comprises the objects contained in a scene and the setting of the scene. It is used to enhance character emotion or to set the dominant mood of a film. For example, in the first shot taken from 2001: A Space Odyssey (Stanley Kubrick, 1968), the futuristic furniture and reduced colour scheme stress the sterility and impersonality of the space station environment. In the picture beside it, the digital nature of the HAL computer is represented by the repeating patterns and strong geometrical design of the set.
Rear Projection
The foreground part of the scene is shot separately from the background, which is often shot earlier, on location; the two are then merged together. Rear projection provides an economical way to set films in exotic or dangerous locations without having to transport expensive stars or endure demanding conditions. In some films, the relationship between scenes shot on location and scenes shot using rear projection becomes a signifying pattern.
Example: Rear projection is featured extensively in Douglas Sirk's lush melodrama Written On The Wind (1956). Specifically, almost every car ride is shot in this way, due to the constraints of shooting in the studio. By speeding up the rate of the projected images in the background, or quickly changing its angle, rear projection allows for an impression of speed that involves no real danger.
Lighting
One of the best ways to enhance the impact of one's video images is through the creative use of lighting. This is, perhaps, the most important element in maximizing the image quality one is capable of producing with any camera or video format. Good lighting can make a Standard Definition camcorder shine, while poor lighting can make the most expensive professional High Definition video camera look inferior. In more subtle ways, lighting can have a tremendous impact on how the viewer perceives your video. Through creative lighting we can establish a mood or the time of day, enhance the illusion of three-dimensionality, reveal or obscure visual information, and create an artificial reality.
Good lighting is not just the province of the people in Hollywood. One does not need a big budget and lots of expensive lighting equipment to do creative lighting. There are lots of inexpensive tools and simple tricks that you can use to enhance your lighting. But first, let's talk about some of the basic principles of lighting for video.
Mood lighting refers to the use of light to illuminate an object or background purposely to evoke a certain mood or emotion. It is very subtle. Evil characters are usually illuminated from beneath the chin, giving them a certain eerie and demonic appearance.
Basic Elements of Lighting
As stated earlier, good lighting is not just the province of Hollywood professionals. Let’s talk about some of the basic principles of lighting for video.
One of the more important realizations in learning about lighting is that lighting is not necessarily the art of adding light to a scene. Instead, as Tom Le Tourneau so aptly put it in his book, Lighting Techniques For Video Production, lighting is "the art of Casting Shadows". For some of you, that may be a radical statement. I myself spent years trying to eliminate ugly shadows in my scenes, typically by adding more and more lights. I had forgotten what had been taught to me… ‘to paint with light and shadow’
But, stop and think about it. In movies and TV shows, frequently it’s the depth and placement of shadows that make the lighting so dynamic and evocative. Shadows, from subtle to dark, help give a scene 3 dimensionality and help establish the mood of the scene. Shadows can also establish the time of day, hide or accentuate features of the set or actors, and suggest set elements, such as windows, which don't really exist. So, an important element toward creating better lighting is to start to look at lights as not only sources of illumination, but also as shadow generators.
Seeing Lighting
This also leads us to a valuable skill that is important to develop - the ability to SEE how light and shadows fall on objects and people around us, and in the scenes that we shoot. You have to make a concerted effort to look past everything else and concentrate on the subtleties of colour, shadow, and highlights which are created through lighting.
For example, in the black & white picture on the right, notice how the directional light falls on the girl's face. Features near the source of light are strongly lit, with details accentuated by highlights, while the opposite side of the face is darker and less defined. Also observe how the shadows contour her cheeks and define her hair. The nose shadow gives away the height and angle of the source.
In the case of the black-and-white portrait on the left, try reading the diffused nature of the light source - the near half-dome quality of the lighting.
Here notice how the light from a window falls on objects in a room. Objects near the window are strongly lit on the side facing the window, with details accentuated by highlights, while the opposite side of the objects are darker and less defined. Also observe how the wall opposite the window softly reflects light back into the room helping to illuminate the side of the subjects away from the window.
All around us are wonderful examples of how light defines our environment. You just have to LOOK.
Now that you're looking at the effect of natural light falling around you, we need to discover how we can recreate and enhance those effects in our videos.
Let's look at the four basic elements of lighting:
1. Direction,
2. Quality,
3. Lighting Ratio, and
4. Control.
Each of these elements contributes to the overall effect of our lighting and needs to be considered when we design our lighting.
Let's look at the first basic element: Direction
The direction of light is specifically related to the height and angle of the lighting source. Height refers to where the light source is placed above ground level. Is it above, below, or even with the subject? Angle refers to the slope of the light's beam. Together, height and angle determine where the highlights and shadows fall on your subject.
Placement of the light source directly above the heads of the subjects creates a different effect than placing the source at ground level and pointing up at the subjects.
Down Angle
In a still from ‘Men In Black’ by Columbia/Tri Star Motion Picture Companies, one can see that placement of the light source above the subjects and angled straight down results in a glowing effect on the tops of heads and shoulders while the face and body are shadowed.
This lighting effect might suggest an interrogation room or spiritual encounter. In this example the subjects look subservient to the light source which represents an entity of higher power.
Up Angle
In the second still, which is also from ‘Men In Black’ by Columbia/Tri Star Motion Picture Companies, one can see that light placed on the ground and aimed up at the subjects produces a dramatically different effect.
Unusual shadows are created by placing the light low and in this case from behind. This lighting design creates a sinister or otherworldly effect. In this example, the subjects are made to look powerful and threatening.
These two examples are extremes; most non-theatrical projects don't involve sinister villains or powers from above. However, they point out how strong the relationship is between angle and height and how they affect the viewer's perception of a scene. At the least, you need to be careful not to imply something you don't intend by poor placement of your light source in relation to your subject. On the other hand, thoughtful placement of your lights can enhance and improve the look of your subject or scene.
Let's look at the second basic element: Quality
The quality of light relates to the hardness or softness of the light striking the subject.
Hard light is characterized by sharp beams of light with distinct edges between light and shadow. Hard light typically produces distinct dark shadows.
Here, in the still from ‘Pirates of Penzance‘, provided by the Theatre Arts Department, California State University, Fresno, we see a good example: the spotlight, which bathes stage performers in light while throwing a distinct circular pattern of light and shadow around the performers. This type of lighting is useful for creating drama and excitement and is often associated with night scenes.
Soft light, on the other hand, caresses the subject with the transition from light to shadow diffused. Soft light is used frequently in television shows and commercials because it is very complimentary to the subject and can help to diminish harsh shadows.
The effects of soft light can be clearly seen in a still from the film ‘Nebraska’, cinematographed by Chuck Barbee. Here the effect of using soft light is further accentuated by the use of blue tones.
Let's look at the third basic element: Lighting Ratio
Lighting ratio refers to the difference in brightness from the lightest area of a subject to the darkest. This brightness difference is described by a numerical ratio that defines how many times brighter the brightest area is compared to the darkest area. For example, a 2:1 lighting ratio means that the brightest area of lighting on the subject is twice as bright as the darkest.
Here is an example of different lighting ratios: depending on the sensitivity of the camera, video can accommodate up to about an 8:1 lighting ratio before the shadow areas lose all detail. The more commonly used lighting ratios for video are 2:1, 3:1, and 4:1.
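As a rough illustration of this arithmetic (not part of the original article), the short Python sketch below computes a lighting ratio from two brightness readings and checks it against the roughly 8:1 limit quoted above; the function names and the use of a hard cut-off are assumptions made purely for the example.

# A minimal sketch: computing a lighting ratio from two brightness readings.
# Readings are assumed to be in the same units (for example, foot candles);
# treating 8:1 as a hard limit is a simplification.

def lighting_ratio(brightest: float, darkest: float) -> float:
    """Return how many times brighter the brightest area is than the darkest."""
    if darkest <= 0:
        raise ValueError("darkest reading must be positive")
    return brightest / darkest

def within_video_range(ratio: float, limit: float = 8.0) -> bool:
    """True if the ratio is within what video can typically accommodate."""
    return ratio <= limit

key_side = 200.0   # hypothetical reading on the brightly lit side
fill_side = 100.0  # hypothetical reading on the shadow side

ratio = lighting_ratio(key_side, fill_side)
print(f"Lighting ratio {ratio:.0f}:1, video-safe: {within_video_range(ratio)}")
# prints: Lighting ratio 2:1, video-safe: True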
Moving on, let’s look at the fourth basic element of lighting: Control.
Control refers to the methods we use to shape and colour the light emitted from our light sources. Part of the beam of a light could be blocked in order to create a shadow in a specific area of the subject. The film noir movies of the 40's made frequent use of shadow placement to heighten the drama.
In the still from ‘The Sweet Hereafter’, cinematographed by Paul Sarossy, the light through the window is significantly blocked so that a distinct shadow falls across the subject's face, with only part of the subject's face lit by sunlight.
Another method of controlling light is to place translucent material in front of the light which alters the light's beam or colour. Rock concerts achieve their colour lighting effects with this method.
THREE-POINT LIGHTING
In lighting, as in most creative endeavours, there are basic design guidelines. Mastering these guidelines provides a firm foundation for the development of one's lighting skills and a starting point for more creative and daring lighting designs. One of the basic guidelines for designing a workable lighting design is called 3-point lighting. This is a standard lighting scheme for most classical narrative cinema. In order to model an actor's face with a sense of depth, light from three directions is used, as demonstrated in this figure.
In other words, the 3-point lighting design uses three light sources to illuminate the subject, provide shape and three-dimensionality, and separate the subject from the background. These three light sources are the Key light, Fill light, and Back light.
• The KEY light is the dominant light source striking the subject. It is also called a principal modelling light.
• The FILL light, placed on the opposite side, ensures that the shadows cast by the Key light are filled in, hence the name. And,
• The BACK light is placed behind the subject. It provides separation to the subject from its background.
The KEY, FILL and BACK lights represent the 3-points of a basic lighting design.
The KEY light is the dominant light source striking the subject. Typically, the key light is at least twice as bright as the fill light. In the basic 3-point design, the KEY light is placed 45 degrees to the side of the subject and at a 45 degree angle above the subject.
The FILL light is placed on the opposite side of the subject from the key light, usually at a lower angle of about 23 degrees rather than 45. The fill light is typically about half as bright as the key light, or less.
The BACK light is placed behind the subject, again at about a 45 degree angle above and behind the subject. The brightness of the back light can range in intensity from the level of the fill light to that of the key light, depending on the reflectivity of your subject. For example, a person with blond or gray hair needs far less back light than someone with brown or black hair.
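As an informal illustration (not from the original text), the Python sketch below turns these placement guidelines into rough light positions around a subject; the coordinate convention, the 3-metre throw distance, and the subject height are assumptions chosen only to make the geometry concrete.

# A minimal sketch: converting the placement guidelines above into rough
# (x, y, z) positions. The subject stands at the origin facing the camera
# along the +y axis; distances are in metres and are purely illustrative.

import math

def light_position(azimuth_deg: float, elevation_deg: float, distance: float,
                   subject_height: float = 1.6):
    """Return (x, y, z) for a light aimed at the subject's head."""
    az = math.radians(azimuth_deg)    # 0 degrees = directly in front of the subject
    el = math.radians(elevation_deg)  # 0 degrees = level with the subject's head
    x = distance * math.cos(el) * math.sin(az)
    y = distance * math.cos(el) * math.cos(az)
    z = subject_height + distance * math.sin(el)
    return round(x, 2), round(y, 2), round(z, 2)

# Basic 3-point design: key 45 degrees to one side and 45 degrees up,
# fill on the opposite side at a lower angle, back light behind the subject.
print("key :", light_position(azimuth_deg=45, elevation_deg=45, distance=3))
print("fill:", light_position(azimuth_deg=-45, elevation_deg=23, distance=3))
print("back:", light_position(azimuth_deg=180, elevation_deg=45, distance=3))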
MEASURING LIGHTING
To determine the amount of light coming from each of the light sources, namely the key, fill, and back lights, a light meter has to be used. Light meters are hand-held devices which measure light. There are two basic types of light meters: incident and reflected.
Incident light meters measure the amount of light striking the subject. Measurements are taken at the subject's location with the meter pointing at the light source to be measured.
Reflected light meters measure the amount of light reflected by the subject. Measurements with this meter are taken from the camera's location with the meter pointed at the subject. (The automatic meter which adjusts the iris in camcorders is a reflected light meter.)
NOTES for Distribution:
Use a meter reading as a guideline rather than a dictate for correct exposure. This makes it important that you understand how your particular meter works so you can consistently get good results no matter what the lighting. The place to begin this understanding is the instruction manual that came with your meter or camera. The instructions should familiarize you with the meter's specific features, its flexibility, and its limitations. Most camera and exposure meter instructions provide the basic techniques of light measurement and mention some of the situations that may "fool" the meter. If you can't find the instructions, write to the manufacturer for them.
USING REFLECTED-LIGHT METERS
Once you have set the proper film speed on your camera or meter, you are ready to make the exposure-meter reading. With a reflected-light meter (in camera or handheld), point the camera or meter at the subject. The meter will measure the average brightness of the light reflected from the various parts of the scene. With an in-camera meter, a needle or diode display in the viewfinder or an LCD display on top of the camera will tell you when you have achieved the proper combination of lens and shutter-speed settings. If the camera is fully manual, you must set both the aperture and shutter speed. Automatic cameras may set both shutter speed and aperture; or they may set just one of the controls, leaving you to set the other.
If you're using a handheld meter, read the information on your meter and set the camera controls accordingly. An overall exposure reading taken from the camera position will give good results for an average scene with an even distribution of light and dark areas. For many subjects, then, exposure-meter operation is mostly mechanical; all you do is point the meter (or camera) at the scene and set the aperture and shutter speed as indicated. But your meter does not know if you need a fast shutter speed to stop action or a small aperture to extend depth of field. You will have to select the appropriate aperture and shutter combination for the effect you want. There will be other situations where either the lighting conditions or the reflective properties of the subject will require you to make additional judgements about the exposure information the meter provides, and you may have to adjust the camera controls accordingly.
A reflected-light meter reading is influenced by both how much light there is in the scene and how reflective the subject is. The meter will indicate more exposure for a subject that reflects little light than for one that reflects a great deal, even if both are in the same scene and in the same light. Because reflected-light meters are designed to make all subjects appear average in brightness, the brightness equivalent to medium gray, they suggest camera settings that will overexpose (make too light) very dark subjects and underexpose (make too dark) very light subjects.
Although reflected-light meters are influenced more by the largest areas of the scene, the results will be acceptable when the main subject fills the picture, provided it is of average reflectance (neither very light nor very dark). However, what happens if a relatively small subject is set against a large dark or light background? The meter will indicate a setting accurate for the large area, not for the smaller, but important, main subject. Therefore, when the area from which you take a reflected-light reading is very light or very dark, and you want to expose it properly, you should modify the meter's exposure recommendation as follows (a small worked sketch follows the list):
• For light subjects, increase exposure by 1/2 to 1 stop from the meter reading.
• For dark subjects, decrease exposure by 1/2 to 1 stop from the meter reading.
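The Python sketch below is a hypothetical illustration of these two corrections; the function names, the one-stop default, and the rounding are assumptions, not part of any standard metering specification.

# A minimal sketch: applying the reflected-meter corrections listed above.
# Positive values mean giving MORE exposure than the meter suggests,
# negative values mean giving LESS.

def exposure_adjustment(subject: str, stops: float = 1.0) -> float:
    """Return the exposure change, in stops, for a reflected-meter reading."""
    if subject == "light":   # snow, white sand, bright walls
        return +stops        # increase exposure by 1/2 to 1 stop
    if subject == "dark":    # black clothing, deep shade
        return -stops        # decrease exposure by 1/2 to 1 stop
    return 0.0               # average subject: use the reading as-is

def adjust_aperture(f_number: float, stops: float) -> float:
    """Open up (positive stops) or close down (negative stops) from an f-number."""
    return round(f_number / (2 ** (stops / 2)), 1)

print(adjust_aperture(16, exposure_adjustment("light")))  # 11.3, roughly f/11
print(adjust_aperture(16, exposure_adjustment("dark")))   # 22.6, roughly f/22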
Selective Meter Readings
To determine the correct exposure for high contrast scenes with large areas that are much darker or much lighter than the principal subject, take a selective meter reading of only the subject itself. How do you do this? Move the meter or camera close to the subject. Exclude unimportant dark or light areas that will give misleading readings. In making close-up readings, also be careful not to measure your own shadow or the meter's shadow.
Selective meter readings are useful for dark subjects against a bright background like snow or light sand, or for subjects in shade against a bright sunlit background. There is also the reverse of this: The subject is in bright sun and the background is in deep shade. In all these situations, your camera has no way of knowing which part of the scene is the most important and requires the most accurate exposure, so you must move in close so the meter will read only the key subject area. For example, if you want to photograph a skier posed on a snowy slope on a bright, sunny day, taking an average reading of the overall scene will result in underexposure. The very bright snow will overly influence the meter and the reading will be too high. The solution is to take a close-up reading from the skier's face (or a piece of medium-toned clothing) and then step back the desired distance to shoot the picture. Some cameras with built-in meters have an exposure-hold button or switch to lock the exposure setting when you do this. This technique is useful anytime the surroundings are much brighter or darker than your subjects.
Landscapes and other scenes with large areas of open sky can also fool the meter. The sky is usually much brighter than other parts of the scene, so an unadjusted meter reading will indicate too little exposure for the darker parts of the picture. One way to adjust for this bias without having to move in close is to tilt your lens or meter down to exclude the sky while taking your meter reading. The sky will probably end up slightly overexposed, but the alternative would be to find a different shooting position excluding most or all of the sky. There are also graduated neutral density filters that work well in such situations. A neutral density filter absorbs all colours of visible light evenly, and you can position a graduated filter so that the darker portion is at the top of the image, where it will darken the sky without affecting the ground below. Incidentally, some built-in meters are bottom-weighted to automatically compensate for situations like this, so check your manual.
Bright backlighting with the subject in silhouette can also present a challenge. With the light shining directly into the lens or meter, aiming the meter into the light can cause too high a reading. If you don't want to underexpose the subject, take a close-up reading, being especially careful to shade the lens or meter so that no extraneous light influences the reading.
Substitute Readings
What if you can't walk up to your subject to take a meter reading? For instance, suppose that you're trying to photograph a deer in sunlight at the edge of a wood. If the background is dark, a meter reading of the overall scene will give you an incorrect exposure for the deer. Obviously, if you try to take a close-up reading of the deer, you're going to lose your subject before you ever get the picture. One answer is to make a substitute reading off the palm of your hand, providing that your hand is illuminated by the same light as your subject, and then use a lens opening 1 stop larger than the meter indicates. For example, if the reading off your hand is f/16, open up one stop to f/11 to get the correct exposure. The exposure increase is necessary because the meter overreacts to the brightness of your palm which is about twice as bright as an average subject. When you take the reading, be sure that the lighting on your palm is the same as on the subject. Don't shade your palm.
Another subject from which you can take more accurate and more consistent meter readings is a KODAK Gray Card, sold by photo dealers. These sturdy cards are manufactured specifically for photographic use. They are neutral gray on one side and white on the other. The gray side reflects 18% of the light falling on it (similar to that of an average scene), and the white side reflects 90%. You can use a gray card for both black-and-white and colour photography. Complete instructions are included in the package with the cards.
Handling High Contrast
How do you determine the correct exposure for a high-contrast scene, one that has both large light and dark areas? If the highlight or shadow areas are more important, take a close-up reading of the important area to set the exposure. With colour slide film, keep in mind that you will get more acceptable results if you bias the exposure for the highlights, losing the detail in the shadows. In a slide, the lack of detail in the shadows is not as distracting as overexposed highlights that project as washed-out colour and bright spots on the screen. If you are working with black-and-white film, you can adjust the development for better reproduction of the scene contrast, particularly in highlights.
But what if the very light and very dark areas are the same size and they are equally important to the scene? One solution is to take selective meter readings from each of the areas and use an f-number that is midway between the two indicated readings. For instance, if your meter indicates an exposure of 1/125 second at f/22 for the brightest area and 1/125 second at f/2.8 for the darkest area--a range of six stops--set your camera at 1/125 second and f/8. This is a compromise solution, but sometimes it is your only choice short of coming back another day or changing your viewpoint, and the composition of the picture, to eliminate the contrast problem.
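As a hedged illustration of that midpoint calculation (not part of the original text), the sketch below finds the aperture halfway, in stops, between two selective readings taken at the same shutter speed; the geometric-mean approach is standard arithmetic, but the function names are mine.

# A minimal sketch: the midpoint-aperture calculation described above.
# Both readings are assumed to be at the same shutter speed.

import math

def stops_between(f_a: float, f_b: float) -> float:
    """Number of stops separating two f-numbers."""
    return abs(2 * math.log2(f_b / f_a))

def midpoint_aperture(f_bright: float, f_dark: float) -> float:
    """F-number halfway (in stops) between the two readings: the geometric mean."""
    return math.sqrt(f_bright * f_dark)

print(round(stops_between(22, 2.8)))       # about 6 stops of scene contrast
print(round(midpoint_aperture(22, 2.8)))   # about 8, i.e. roughly f/8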
USING SPOT METERS
Perhaps the best solution when you need a selective meter reading is offered by the spot meter. Handheld averaging meters generally cover about 30º, while handheld spot meters typically read a 1º angle. The angle of spot meters built into the camera is usually wider, about 3 to 12º. The biggest advantage of a spot meter is that it allows you to measure the brightness of small areas in a scene from the camera position without walking in to make a close-up reading. Since a spot meter measures only the specific area you point it at, the reading is not influenced by large light or dark surroundings. This makes a spot meter especially useful when the principal subject is a relatively small part of the overall scene and the background is either much lighter or darker than the subject. Spot meters are also helpful for determining the scene brightness range.
A spot meter can take more time to use since it usually requires more than one reading of the scene. This is particularly true when the scene includes many different bright or dark areas. To determine the best exposure in such a situation, use the same technique described previously for high-contrast subjects: Select the exposure halfway between the reading for the lightest important area in the scene and that for the darkest important area in the scene. Bear in mind, though, all films have inherent limits on the range of contrast they can accurately record. Remember too, you can sometimes create more dramatic pictures by intentionally exposing for one small area, such as a bright spot of sunlight on a mountain peak, and letting the dark areas fall into black shadow without detail. Spot meters are ideal for such creative applications.
USING INCIDENT-LIGHT METERS
To use an incident-light meter, hold it at or near the subject and aim the meter's light-sensitive cell back toward the camera. The meter reads the amount of light illuminating the subject, not light reflected from the subject, so the meter ignores the subject and background characteristics. As with a reflected reading, an incident reading provides exposure information for rendering average subjects correctly, making incident readings most accurate when the subject is not extremely bright or dark.
When taking an incident-light reading, be sure you measure the light illuminating the side of the subject you want to photograph, and be careful that your shadow isn't falling on the meter. If the meter isn't actually at the subject, you can get a workable reading by holding the meter in the same kind of light the subject is in. Because the meter is aimed toward the camera and away from the background light, an incident reading is helpful with backlit subjects. This is also the case when the main subject is small and surrounded by a dominant background that is either much lighter or darker.
The exposure determined by an incident-light meter should be the same as reading a gray card with a reflected-light meter. Fortunately, many scenes have average reflectance with an even mix of light and dark areas, so the exposure indicated is good for many picture-taking situations. However, if the main subject is very light or very dark, and you want to record detail in this area, you must modify the meter's exposure recommendations as follows:
• For light subjects, decrease exposure by 1/2 to 1 stop from the meter reading.
• For dark subjects, increase exposure by 1/2 to 1 stop from the meter reading.
You will notice that these adjustments are just the opposite of those required for a reflected-light meter. An incident meter does not work well when you are photographing light sources themselves, because it measures the light falling on the subject rather than the brightness of the source. In such situations you will be better off using a reflected-light meter or an exposure table.
If the scene is unevenly illuminated and you want the best overall exposure, make incident-light readings in the brightest and darkest areas that are important to your picture. Aim the meter in the direction of the camera position for each reading. Set the exposure by splitting the difference between the two extremes.
Actual measuring
Foot-candle meters are the most commonly used meters for video. These meters display the amount of light striking them on a scale calibrated in foot candles, typically from 0 to 500, and are not dependent on any other factors.
In order to get an accurate reading, the meter needs to be placed immediately in front of the subject, facing the light source to be measured. The easiest way to take the key, fill, and back light measurements is to do them one at a time, with the other two lights turned off. Starting with just the key light on, place the meter in front of the area of the subject struck by the key light, aim the meter at the key light, and note the number of foot candles; suppose it reads 100 foot candles. Next, turn on just the fill light and take another reading facing the fill light. If our intended lighting ratio is 2:1, then the fill light should read about 50 foot candles.
Now, turn on just the back light and take its reading. The back light should be somewhere between 50 and 150 foot candles, depending on the effect desired.
The final step is to re-measure the key, fill, and back light positions with all the lights on. This is important since where the illumination from the lights overlaps the intensity increases. Adjust the intensity of the lights as needed to maintain the desired lighting ratio.
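The short sketch below (an illustration of mine, not from the original text) checks a set of foot-candle readings against the intended lighting ratio; the tolerance value and the back-light range are assumptions drawn from the figures quoted above.

# A minimal sketch: checking key, fill, and back readings (in foot candles)
# against the intended 2:1 lighting ratio and the suggested back-light range.

def key_fill_ok(key_fc: float, fill_fc: float, target_ratio: float = 2.0,
                tolerance: float = 0.25) -> bool:
    """True if key/fill is within the tolerance of the target ratio."""
    return abs(key_fc / fill_fc - target_ratio) <= tolerance

def back_ok(back_fc: float, low: float = 50.0, high: float = 150.0) -> bool:
    """True if the back light falls in the commonly used range."""
    return low <= back_fc <= high

# Example readings taken one light at a time, as described above:
key, fill, back = 100.0, 50.0, 120.0
print("key/fill ratio ok:", key_fill_ok(key, fill))  # 100/50 = 2:1 -> True
print("back light ok   :", back_ok(back))            # 120 fc    -> True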
Now that we know what three point lighting is and how to measure light intensity, let us acquaint ourselves with a concept of:
HIGH-KEY LIGHTING
A technique used in filming or videotaping for television, high-key lighting refers to lighting a scene so as to eliminate shadow areas. It is usually associated with news, interview, or panel programs, which are basically not of a dramatic nature; hence the lighting director attempts to give the scene a bright, lively appearance.
In other words, high-key lighting is a scheme in which the fill light is raised to almost the same level as the key light. This produces images that are very bright and that feature few shadows on the subjects, and thus low contrast. This was originally done partly for technological reasons, since early film and television did not deal well with high contrast ratios, but it is now used to create an upbeat mood. It is often used in sitcoms and comedies. This bright image is characteristic of entertainment genres such as musicals and comedies like ‘Everybody Loves Raymond’.
• High Key images are considered happy and present the subject as if in a toothpaste advertisement.
• When looking at a High Key picture, you will probably notice two things right away, apart from its cheerful mood:
• The first is that the picture is bright. To create a high key image you need to set your exposure to high values, taking care not to overexpose.
• The other noticeable feature of High Key images is the lack of contrast. In addition to the tone being bright, you will notice that it is almost even across the scene. This is achieved by carefully setting the lighting of the picture.
• Another feature, which needs closer attention to notice, is the lack of shadows in the picture. The shadows cast by the model (or subject) are suppressed by the lighting in the scene.
Both High Key and Low Key images rely on the handling of contrast, but in very different ways. When approaching a dramatic portrait shoot, the decision to make it a High Key, Low Key or "just" a regular image has a great impact on the mood the picture will convey. While High Key images are considered happy and present the subject as if in a toothpaste advertisement, Low Key portraits are dramatic and convey a lot of atmosphere and tension. Let's explore these two dramatic lighting alternatives.
LOW-KEY LIGHTING
In filming or videotaping for television, low-key lighting means lighting a scene so that there is a great deal of contrast between dark and light areas, making artistic use of deep shadows. Low-key lighting is used effectively in dramatic presentations to create variety and establish mood, particularly in mysteries or thrillers.
In other words, the Low Key design uses very little fill light, creating strong contrasts between the brightest and darkest parts of an image and often creating strong shadows that obscure parts of the principal subjects. Low key light shows the contours of an object by throwing areas into light or shadow while the fill light provides partial illumination in the shadow areas to prevent a distracting contrast between bright and dark. For dramatic effects, one may wish the contrast to be high — to emphasize the brightness of the sun in a desert scene, to make a face look rugged; the lighting scheme is often associated with "hard-boiled" or suspense genres such as film noir. Here are some examples from Touch of Evil (Orson Welles, 1958.)
Costume
The clothes that characters wear can signify character, advertise particular fashions, or make clear distinctions between characters. In this example from Life on Earth (La Vie sur Terre, 1998), long, loose clothes and big hats are used to further stress the cultural and psychological implications of a nomadic existence, and the difference between the coldness of France and the colourful poverty of Mauritania.
Acting
There is enormous historical and cultural variation in performance styles. Acting by British actors, for example, is very subtle compared with that in Hindi serials; early styles were melodramatic, and then came the naturalistic style.
Typage
Typage refers to the selection of actors on the basis that their facial or bodily features readily convey the truth of the character the actor plays. The filmmakers thought that the life-experience of a non-actor guaranteed the authenticity of their performance when they attempted a dramatic role similar to their real social role.
Typage relies on a stereotype to communicate the essential qualities of a character. It is still visible nowadays, but the selection is made from professional actors. In this scene from Pudovkin's Storm Over Asia (Potomok Chingis-Khana, USSR, 1928), professional and non-professional actors are used alike. The cast was selected not on the basis of their skills or reputation, but on their physical resemblance to the following types:
Example
1. The pompous and greedy general
2. The hero of the Mongol people
3. The general's wife with royal ambitions
Space
Space includes the depth, proximity, size, and proportions of the places and objects in a scene; these can be manipulated through camera placement and lenses, lighting, and decor, effectively determining mood or the relationships between elements.
Deep space
A film utilizes deep space when significant elements of an image are positioned both near to and distant from the camera. For deep space these objects do not have to be in focus; sharp focus throughout is the defining characteristic of deep focus, not of deep space.
This particular scene is taken from the Iranian film ‘The Colour of Paradise’ (Rang-e Khoda, 1999). It was used to integrate the characters into their natural surroundings, to map out the actual distances involved between one location and another in order to emphasize just exactly how hard it is for a particular character (especially children) to move from one place to another.
For example, in the composition of the still from ‘The Colour of Paradise’, Mohammad's father (the character on the horse) looks in apprehension at the school where his blind son is visiting. In the far background, Mohammad is playing with his sister and other "normal" children, but his father does not believe Mohammad should try to mingle with them since he could never be their equal, due to his disability. On the other hand, Mohammad enjoys the company of his new friends in the countryside much more than the School for the Blind in Tehran. The distance between the two points of view, as well as the impossibility of communication between Mohammad and his father, is reflected in the deep-space mise-en-scène.
At this point it will be appropriate to explain how this deep space may be achieved. The trick lies in deep focus. Deep focus is a photographic and cinematographic technique incorporating a large depth-of-field. Depth-of-field is the front-to-back range of focus in an image, that is, how much of it appears sharp and clear. Consequently, in deep focus the foreground, middle-ground and background are all in focus. This can be achieved through knowledgeable application of the hyperfocal distance of the camera lens being used.
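For readers who want the arithmetic, the sketch below uses the standard hyperfocal-distance formula H = f^2 / (N * c) + f; the 0.03 mm circle of confusion (a full-frame 35 mm value) and the example lens settings are assumptions, not figures from the original text.

# A minimal sketch: estimating the hyperfocal distance of a lens.
# Focusing at H keeps everything from roughly H/2 to infinity acceptably sharp.

def hyperfocal_mm(focal_length_mm: float, f_number: float,
                  coc_mm: float = 0.03) -> float:
    """H = f^2 / (N * c) + f, all values in millimetres."""
    return focal_length_mm ** 2 / (f_number * coc_mm) + focal_length_mm

H = hyperfocal_mm(focal_length_mm=25, f_number=8)
print(f"Hyperfocal distance: {H / 1000:.1f} m")          # about 2.6 m
print(f"Sharp from about {H / 2000:.1f} m to infinity")  # about 1.3 m onward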
The opposite of deep focus is shallow focus, in which only one plane of the image is in focus.
In the cinema, Orson Welles and his cinematographer Gregg Toland were the two individuals most responsible for popularizing deep focus. Their film Citizen Kane (1941) is a veritable textbook of possible uses of the technique.
Frontality
Frontality refers to the staging of elements, often actors, so that they face the camera. Frontal staging is usually avoided because it would seem as if the actors were aware of the camera, which would break the illusion. Some films have the characters speak to the camera, in what is called a direct address. This is done in the remake of the movie ‘Alfie’, where the character Alfie converses with the audience, addressing them directly.
For example,
In the still from ‘The Stendhal Syndrome’ (La Sindrome di Stendhal, Italy, 1996) Dario Argento exploits the iconicity of frontal staging in multiple ways.
• First, the characters are situated on a parallel plane with the famous profile portraits of The Duke of Urbino and his wife by Piero Della Francesca.
• Then, they are flattened; the space between them and the paintings is made shallow by the use of telephoto lens, while keeping all planes in focus.
• Frontality is used to equate the characters with the paintings: both are fictional creations, the product of an artist's work.
• Finally, as a visual pun, a Japanese tourist is shown taking a picture of the viewers.
The shallow space
It is the opposite of deep space; the image is shown with very little depth, and objects can appear plastered onto the background. The figures in the image occupy the same or closely positioned planes. While the resulting image loses realistic appeal, its flatness enhances its pictorial qualities, and striking graphic patterns can be obtained.
In the stills from ‘My Neighbour Totoro’ (Tonari No Totoro, Japan, 1988) the entire background is filled with a lamp-eyed, grinning cat bus. The Shallow space creates ambiguity: is the cat brimming with joy at the sisters' encounter, or is he about to eat them?
Shallow space can be staged, or it can also be achieved optically, with the use of a telephoto lens. This is particularly useful for creating claustrophobic images, since it makes the characters look like they are being crushed against the background.
A matte shot is a shot in which two photographic images (usually background and foreground) are combined into a single image using an optical printer. Matte shots can be used to add elements to a realistic scene or to create fantasy spaces. In the example from Vertigo (1958), director Alfred Hitchcock adds the white belfry, a model, onto the shot of the roof; in the second image, the sky in the background is clearly a painting, with the purpose of making us believe the scene takes place at the top of a bell tower rather than on the studio floor.
Matte shooting is one of the most common techniques used in studio filmmaking: it is cheaper to shoot a picture of the Eiffel Tower than to travel to Paris, and sometimes it would be impossible or too dangerous to shoot in the real space. Special effects and computer-generated images have largely taken over the function of matte shots nowadays.
Off screen Space
Space that exists in the fictional world but that is not visible in the frame. Off-screen space becomes significant when the viewer's attention is called to an event or presence in the fictional world that is not visible in the frame. Off-screen space is commonly used to build suspense in horror and thriller films, such as The Stendhal Syndrome.
The Three Production Phases
The production process refers to the stages required to complete a media product (film, video, television, or audio recording), from the initial idea to the final copy. The production process is commonly broken down into preproduction, production, and postproduction.
This chart represents a small production unit; however, many more people may be involved, depending on the scale of production and the nature of the program to be produced.
Pre-production
Pre-production refers to the tasks undertaken before production begins. It is the most important phase of the entire production because it directly affects the execution stage. It is the foundation of the whole production process. It is in preproduction that all the basic ideas and approaches have to mature into a final blueprint for the production, as well as for the post-production stage.
Program idea:
For a program to be successful, knowledge of the needs, interests, and general background of the target audience is important throughout each production phase. The program should have value and leave a lasting effect on the viewer, and the content must affect the audience emotionally for them to remember it. Similar productions from the past are referred to in order to avoid repeating old mistakes, with differences in time, locations, and audiences taken into consideration. Research on the subject has to be done in depth.
Capturing and Holding Viewer Attention: Viewers have thousands of channels to choose from. The content of the program must appeal to the viewers to make them want to tune in and watch episodes of it time and time again. They should want to make a choice and select it from the numerous programs offered by various channels.
First, Get Their Attention!
The success of a TV show depends on its TRP ratings. Besides attracting new viewers, it is essential to sustain general viewership; if nothing interesting is communicated, audiences will go elsewhere for entertainment.
Hit the Target
Production must begin with a clear understanding of the needs and interests of the target audience. After this is done, it is important to schedule the program to suit that audience. On TV, movies meant for viewers aged 18 and above are shown after 11 o'clock, and cartoons on channel 33 were shown from two to four o'clock because that is the time children would return from school.
Using Audience-Engaging Techniques
It is a well known fact that audiences react emotionally to program content. Surprisingly, even a logical, educational presentation will garner an emotional response. The audience must be emotionally engaged at all times. They must be given new insights and be exposed to new points of view. Content that reinforces their existing attitudes is readily accepted, while they tend to react against ideas that run contrary to their beliefs. The viewers should never be alienated.
For example, an East Coast TV station did an exposé on a local police chief. An undercover reporter put a camera in a lunch box and filmed the police chief clearly taking a bribe. When the segment was broadcast, there was a negative reaction against the TV station because the police chief was popular with many influential people in the community. The people refused to believe what they saw because they disagreed with it.
A script is a form of literature written by a playwright or a scriptwriter, almost always consisting of dialogue between the characters in drama, intended for theatrical, cinematic, or television performance rather than ‘reading’. The term is often used in contrast to "musical," which refers to a script with a lot of music and singing.
Drafting the script
The idea is formulated into a script. This is the first, or draft, version of the script; numerous versions will subsequently follow. Throughout the rewriting process, a number of story conferences or script conferences typically take place. During these sessions,
• Audience appeal,
• Pace,
• Rhythm, and
• Problems with special interest groups will be looked into.
If it's an institutional production, the production's goals will be reviewed and the most effective ways to present ideas will be investigated. If the director is on board at this time, he or she should be part of these conferences.
Preliminary planning
The discussion of production interpretation takes place. The initial set designs are made; sketches and rough plans are drawn up. Make-up, cast, locations, lighting, and other technicalities are discussed. The overall value of the production to a sponsor or underwriter is determined. Since finance and returns are important issues, a thorough investigation of the audience profile takes place. Generally, the larger the audience, the more marketable a production will be to an underwriter or advertiser. Many broadcasters have cancelled more than one TV series not because it had a small audience, but because it had targeted the wrong audience.
It is at this stage that the potential value of a production to an advertiser or underwriter is balanced against the projected cost of producing and presenting the production. It is ensured that the costs do not exceed the benefits. In commercial television, the return on investment is generally in the form of increased sales and profits. It can also include the expected moral, political, spiritual, or public relations benefit derived from the program.
A TV Budget Summary Sheet
Since a detailed projection has to be undertaken, it is natural to develop a detailed budget at this stage; the accounting will later be done against this budget. The expense heads are prepared and resources allocated. Some buffer is maintained to deal with uncertainties; generally, a margin of 10-15 percent is allocated for this purpose. Care should be taken not to under-budget the production under pressure, as this will result in insufficient funds and affect successful completion of the project. On the other hand, inappropriate extra padding leads to over-budgeting, which in turn raises the risk of the project never going into production because it appears too expensive.
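As a rough, hypothetical illustration of that contingency margin (the line items and figures below are invented for the example, not taken from the text), a simple budget calculation might look like this:

# A minimal sketch: adding a 10-15 percent contingency to line-item estimates.

def budget_with_contingency(line_items: dict, margin: float = 0.10) -> dict:
    """Return the subtotal, contingency, and grand total for a budget."""
    if not 0.10 <= margin <= 0.15:
        print("warning: margin is outside the usual 10-15 percent range")
    subtotal = sum(line_items.values())
    contingency = subtotal * margin
    return {"subtotal": subtotal,
            "contingency": contingency,
            "total": subtotal + contingency}

estimates = {"development/script": 20_000,
             "production": 90_000,
             "post-production": 30_000}
print(budget_with_contingency(estimates, margin=0.12))
# {'subtotal': 140000, 'contingency': 16800.0, 'total': 156800.0}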
A typical budget summary sheet for a television product will start with:
• Name of the program
• Name of the Production House / company
• Number of episodes
• Number of built in commercial breaks
The headings under which a typical summary sheet lists its expenses are described below.
Provisional funding is the extra amount to be kept aside to meet uncertainties.
Development refers to the money to be spent on developing the idea, e.g. rough drafts and scriptwriting.
Production: the expenses that will be incurred during production.
Post-production: the post-production costs are also reflected.
Development /script
Rights: the concept of rights covers authors and other copyright holders. First and foremost, their permission has to be sought, and fees have to be paid for the idea to be used.
Research: the expenses incurred in researching the political climate and other such factors; this is done at every stage.
Sundry expenses refer to small necessities like fuses.
The other expense heads are self-explanatory.
The budget expenses can be broadly categorised into two:
1. above-the-line, and
2. Below-the-line.
Above-the-line refers to the expenses generally relating to the performing, producing, and creative elements: talent, script, music, and others. Below-the-line elements refer to two broad areas:
1. The physical elements such as sets, props, make-up, wardrobe, graphics, transportation, production equipment, studio facilities, and editing
2. The technical personnel such as stage manager, engineering personnel, video recording operators, audio operators, and general labour.
Renting vs. Buying Equipment
Equipment rental is also included. Except for studio equipment that's used every day, it's often more economical to rent equipment rather than buy it.
When buying equipment, one pays for overheads as well: insurance has to be maintained on it, storage of the equipment becomes very important, it has to be sent for servicing, and technical know-how of the equipment becomes essential. Capital is also blocked, which means returns are obtained only when the equipment is used, and depreciation on the equipment cannot be ignored. In short, purchasing equipment may prove to be a liability.
• First, production equipment, especially cameras and recorders, is likely to become outdated quickly. Even though a camera might still be reliable after five years or more, it will be outdated compared to the newer models. It may even be difficult to find spare parts.
• In countries like the U.S., renting provides an income tax advantage. When equipment is purchased, it must be depreciated (written off on income tax) over a number of years.
• When you rent equipment, you increase the opportunities to obtain equipment that will meet the specific needs of your production.
• Purchasing equipment can generate pressure to use it even though, at times, other makes and models might be better suited to your needs.
Determining costs:
Approaches to Attributing Costs
Once the cost of a production is worked out, it needs to be justified, either in terms of cost-effectiveness or expected results. If the production is supposed to last for a year, there will be certain days when not a single inch of footage is exposed. Further, depreciation will have to be accounted for on all the props and property. There are three bases on which to measure cost effectiveness (a short arithmetic sketch follows the list):
1. Cost per Minute: Cost per minute is relatively easy to determine; simply divide the final production cost by the duration of the finished product. For example, if a 30-minute production costs $ 120,000 (US), the cost per minute is $ 4,000 (US).
2. Cost per Viewer: Cost per viewer is also relatively simple to figure out; divide the total production cost by the actual or anticipated size of audience.
In the field of advertising, CPM (or cost-per-thousand) is a common measure. If 100,000 people see a show that costs $ 5,000 (US) to produce, the CPM is $ 50 (US). On a cost-per-viewer basis, this comes out to be only five cents per person.
3. Cost per Measured Results: Suppose that after airing one 60-second commercial, 300,000 boxes of chocolate are sold at a resulting profit of $ 100,000 (US). If a million dollars was spent producing and airing the commercial, the question arises whether it was a good investment. But advertisers air most advertisements more than once. If the cost of TV time is $ 10,000 and we sell 300,000 packages of chocolates after each airing, we will soon show a profit.
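The sketch below simply restates the three measures as small functions, using the same illustrative figures as the text; the function names are mine.

# A minimal sketch: the three cost-effectiveness measures described above.

def cost_per_minute(total_cost: float, minutes: float) -> float:
    return total_cost / minutes

def cost_per_viewer(total_cost: float, viewers: int) -> float:
    return total_cost / viewers

def cpm(total_cost: float, viewers: int) -> float:
    """Cost per thousand viewers."""
    return total_cost / (viewers / 1000)

print(cost_per_minute(120_000, 30))      # 4000.0 dollars per minute
print(cost_per_viewer(5_000, 100_000))   # 0.05, i.e. five cents per viewer
print(cpm(5_000, 100_000))               # 50.0 dollars per thousand viewers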
Return on Investment
It is very difficult to determine the effectiveness of programming in altering human behaviour and attitudes. It is difficult to quantify the return on investment of public service announcements designed, for example, to get viewers to stop smoking; although before-and-after surveys are conducted to measure changes in public awareness, it can be almost impossible to separate out the impact of other factors.
Final script - Finally, a script version emerges. Even this version will probably not be final. In many instances, scene revisions continue right up to the time the scenes are shot. A storyboard is developed; a storyboard consists of drawings of key scenes with corresponding notes on elements such as dialogue, sound effects, and music.
A screenplay or script is a written plan, authored by a screenwriter, for a film or television program. Screenplays can be original works or adaptations from existing works such as novels. A screenplay differs from a script in that it is more specifically targeted at the visual, narrative arts, such as film and television, whereas a script can involve a blueprint of "what happens" in a comic, an advertisement, a theatrical play and other "blueprinted" creations.
The major components of a screenplay are action and dialogue, with the "action" being "what we see happening" and "dialogue" being "what we hear" (i.e., what the characters utter). The characters, when first introduced in the screenplay, may also be described visually. Screenplays differ from traditional literature conventions in ways described below; however, screenplays may not involve emotion-related descriptions and other aspects of the story that are, in fact, visual within the end-product.
• Screenplays in print are highly formal, conforming to font and margin specifications designed to cause one page of screenplay to correspond to approximately one minute of action on screen;
• thus screen directions and descriptions of location are designed to occupy less vertical space than dialogue,
• And various technical directions, such as settings and camera indication are set apart from the text with capital letters and/or indentation.
• Professional screenplays are always printed in 12-point Courier, or another fixed-width font that appears like typewriter type.
Developed during the pre-production stage and used throughout the production and post-production stages, a storyboard is a series of diagrams that are used to depict the composition of a video or film segment. Each diagram consists of: a sketch of the image; a brief description of the visual; notes for the camera operator; the details of the desired audio that will accompany the visual; and an estimate of how long the segment will be. The storyboard cards are then placed in order to provide the foundation for capturing the proper footage and for making the correct editing decisions. Here is an example of what a storyboard card might look like.
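The original example card is an image and is not reproduced here; as a purely illustrative stand-in, the fields listed above could be modelled as a simple data structure (the field names and sample values are assumptions):

# A minimal sketch: the fields of a storyboard card as a small data class.

from dataclasses import dataclass

@dataclass
class StoryboardCard:
    sketch: str               # file name or description of the drawn frame
    visual_description: str   # brief description of the visual
    camera_notes: str         # notes for the camera operator (shot size, moves)
    audio: str                # desired dialogue, music, or sound effects
    duration_seconds: float   # estimate of how long the segment will be

card = StoryboardCard(
    sketch="scene01_frame01.png",
    visual_description="Presenter at desk, morning light through the window",
    camera_notes="MS, slow zoom in to CU",
    audio="Presenter: 'Welcome back...' with light background music",
    duration_seconds=8.0,
)
print(card)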
In short, Storyboards are graphic organizers such as a series of illustrations or images displayed in sequence for the purpose of pre-visualizing a motion graphic or interactive media sequence, including website interactivity.
The storyboarding process, in the form it is known today, was developed at the Walt Disney studio during the early 1930s, after several years of similar processes being in use at Walt Disney and other animation studios.
To better understand the usefulness of the storyboard, here is additional information on basic camera shots, examples of transitions, and explanations of different graphic insert methods.
Basic Camera Shots
As part of your Shot Description, you might want to use the following terms (or abbreviations): extreme close-up (ECU), close-up (CU), medium shot (MS), long shot (LS), and extreme long shot (ELS). Below are some examples of each of these. Keep in mind that people do have different scales that they follow in defining these camera shots. It is up to the director of your production to ensure that the camera operators are capturing the proper footage.
What is Shot Description?
This will contain a description of what the director will be instructing the camera operator to capture on tape. This will help to explain details that cannot be shown by a single sketch. You might decide to use some of the terms used to identify the basic camera shots, such as, extreme close up, medium shot, or long shot in conjunction with your descriptions. If these camera shot terms are new to you, go to the Basic Shot Selection section to see examples.
Extreme Close-up (ECU or XCU): a shot composition that shows the fine details of a subject. An extreme close-up shot is relative to what is considered a close-up shot.
Close-up (CU): a shot composition that captures only a small portion of a subject. A close-up shot is relative to what is considered a medium shot and an extreme close-up shot.
Medium shot (MS): a shot composition that shows about half of the complete subject. A medium shot is relative to what is considered a close-up shot and a long shot.
Long shot (LS): a shot composition that captures most (if not all) of the subject. A long shot is relative to what is considered a medium shot and an extreme long shot.
You might also describe specific camera movement techniques in your shot description. A couple of examples might be a pan or a zoom. Sometimes you might want the camera operator to pan across a scene, or zoom in on a subject. There are many other techniques, and the idea is to clearly state what kind of shot you are going for. Be careful in your shot selections because different combinations can convey different messages to the viewers.
Transitions
When you edit your segments together, how you switch from one segment to the next is called a transition. You do not need special editing equipment to incorporate transitions between your shots. You can create many of your own transitions with well-planned techniques that use the features of the video camera.
As part of your storyboard, you may want to plan for specific transitions into or out of a segment. Here is a list of some of the transitions that you can consider for your video production.
Simple Cut: This will be your most common transition since all it requires during editing is to stop or start the scene where it is convenient. This is effective for quickly changing settings or perspectives.
Black: Editing in a second or two of "black" can help to provide a distinct break between scenes. When you watch television, you may notice that many times when a program goes to commercial, there is a moment of black just before a commercial starts. This helps the audience to know that the commercial is not part of the program.
Fade-in/Fade-Out: Many video cameras have a fade button. Using this feature helps to show the audience that the scene has changed or will change. The fade usually starts or ends with black.
Refocus/Defocus: If your camera doesn't have a fade feature, you could use a technique where you start or end a scene out of focus. You may notice that many times when television programs display text information on the screen, they usually produce it over a defocused scene. Of course, to do this, you must learn how to manually focus your video camera first.
Follow a moving object: To transition into a scene, you can try to record footage where you follow an object or person (that is not the main focus of the scene) moving across until you stop the camera and stay focused on your intended subject. From there you can record any scripted dialog. This helps to avoid having everything jump out at your audience as you transition between scenes.
Be careful in what you select as your transitions in and transitions out, particularly as you go from one "out" to an "in." If the transition out on one shot is a "medium shot, cut" you typically don't want to follow with a similar "cut, medium shot" on the transition in of the next shot. If you have the same type of shots following each other, you need to further examine the details of the segments to make sure that this similarity won't confuse the audience.
There are many more ways that you can transition, and the above examples are just a few to help get you started.
Graphics Insert
Sometimes it helps to show important points with a graphical visual aid. With matching examples and verbal descriptions, you can be sure that your audience will know that you just made an important statement because you backed it up with a graphical image. For example, if you want students to remember key points, you can present a graphic that has the points numbered and sequenced.
On the other hand, you can take advantage of certain graphics to help add information so that you do not necessarily have to have someone verbalizing all the details.
Editor Created Titles: The linear editing system does have the capability to add titles to a video segment. This is an easy way to produce title screens on colored backgrounds, or to add captions to enhance certain images. You might want to add a caption that shows the name of the person that the viewer is seeing. Keep in mind, however, that the built-in title functions are nothing like word processors and you may find your options rather limiting.
Self Created Graphics: If you don't like the appearance of the characters from the editing system's title feature, you might consider creating your own graphics. You could hand draw titles or graphics, or create them on a computer and print them in colour or on colour paper. These could then be shot with the video camera. You won't be able to superimpose any of these graphics with another video image, but it does provide you with more flexibility.
Computer Generated Graphics (recorded to tape): To take the computer generated graphics one step further, you could utilize the video output function of certain computers to record your graphics directly to videotape. With certain computer hardware and software solutions you can record complex animation or stylish scrolling credits. You could also use presentation software such as Microsoft PowerPoint to present important points in graphic format. Once you have segments recorded on tape, you can then use that part as your raw footage that is then edited onto your master tape.
Technical Planning
A tentative schedule is drawn up; this is dictated by the broadcast or distribution deadlines. Selection of key production personnel takes place.
Above-the-line production personnel are selected. In addition to the producer and writer, above-the-line personnel include the production manager, director and, in general, key creative team members. Below-the-line personnel, generally assigned later, include the technical staff. Staging designs are planned in detail, locations are decided, arrangements are made for graphics and special effects, and the required paperwork begins.
If shooting is not done in the studio, key locations are decided on; a location scout or location manager can be hired to find and coordinate the use of the locations suggested by the script. Certain cities encourage TV and film production and maintain film commissions that supply photos and videotapes of interesting shooting locations in their area. They will also provide information on usage fees and the names of people to contact. Changes may be made to the on-location settings; for instance, rooms may have to be repainted or redecorated and visible signs changed. Construction of the scene properties and graphics begins, and audio-visual inserts are kept ready.
Rehearsal script
All the props, costumes, and models are obtained; any lack of equipment is identified; necessary clearances are obtained; and junior talent and shot timings are taken into consideration as well. Once the talent, wardrobe, and sets are decided, negotiations take place and contracts are signed. The rehearsal script is one of the many stages of the script, still undergoing changes before being settled as final.
Camera Script – As per the script, the show format is finalised, breakdown sheets are prepared, and camera cards, cue cards, and prompters are made ready. The camera script contains the shots to be taken and the stage set-ups, and it reads from left to right. The camera card is prepared; it contains information about which camera takes which shots and when, along with the sizes and angles of the shots.
If suitable stock footage is not available or does not meet the needs of the production, one may need to hire a second unit to produce the needed segments.
Second unit work is production done away from the main location by a separate production crew. It generally does not involve the principal, on-camera talent.
Pre Studio Rehearsals
Pre-studio rehearsals are, in fact, rehearsals done before arriving at the studio. They take place from the table reading, through rehearsal scripts, until the final script has evolved. It is during this time that other very important work is undertaken. During this stage of pre-production:
1. Wardrobe selection is done. In fact, the wardrobe is suggested by the script itself. However, the costume designer may have his or her own interpretation.
2. Wardrobe is also coordinated with the look of the sets and locations, and ultimately approved by the director.
3. After a set designer is hired, he or she will review the script, possibly do some research, and then discuss initial ideas with the director.
4. Set Designer prepares sketches of the sets for final approval before actual set construction starts. The sketches may be given to a computer artist.
5. Rehearsals are scheduled, from initial table readings to the final dress rehearsal. Even though personnel may not have finished the sets at this point, the actors can start reading through the script with the director to establish pace, emphasis, and basic blocking (the positioning of sets, furniture, cameras, and actors).
6. Once the sets are finished, the final blocking and dress rehearsals can get underway.
7. Decisions on the remaining staff and production needs are made.
8. At this point key technical personnel, equipment, and facilities are arranged. This includes the rental of both equipment and production facilities.
9. Transportation, catering, and on-location accommodation are also arranged. Unions, which may or may not be involved, often set minimum standards for transportation, as well as the quality of meals and accommodation. Union contracts also cover job descriptions, specific crew responsibilities, and working hours, including graduated pay increases for overtime.
10. A dry run is done, that is, a rehearsal without lights, followed by
11. A staggered (stop-start) rehearsal, and
12. A dress rehearsal, that is, a complete run of the production.
During this stage of Pre-production various other aspects of production are addressed:
• It’s not possible just to go to the location of choice. Except for spot news and short documentary segments, access permits, licenses, security bonds, and insurance policies must be arranged.
• Many semi-public interior locations, such as shopping malls, require filming permits.
• Depending on the nature of the production, liability insurance and security bonds may be necessary, because accidents happen and can be directly and indirectly attributed to the production.
• Arrangements are made to shoot or acquire videotape and film inserts, still photos, and graphics.
• To reduce production costs, existing stock footage in film and tape libraries around the country is checked. This is generally background footage, such as general exterior scenes of an area, to be edited into the production.
• Decisions on music are also made at this point, including working out copyright clearances and royalties for music and visual inserts.
Camera Script: In a single-camera shoot, the director can easily direct the videographer for each shot or sequence of shots. However, in multiple-camera shoots, the director should prepare a camera script, which s/he issues to each camera operator.
The camera script outlines for each camera operator the exact size of shot and precise camera moves the director expects throughout the scene. In live, as-live, or multi-camera studio productions, in which the director has largely worked out when s/he will cut from one camera to another, a single camera script detailing each operator's moves will work well. On a shoot where all cameras are expected to record the entire scene, each operator should have his/her individual camera script.
The directions can be very simple: maintain a medium close-up on Rico throughout the scene. Or they can involve complex shot combinations and camera moves, with each movement cued by dialogue, movement or lighting changes, and marked by the director on the camera script.
The camera script is developed from the final shot breakdown sheets and, in turn, feeds the prompter and cue cards. In simple words, the camera script lists the shots to be taken.
Prepare studio: This refers to all the activities needed to ready the studio for the shoot, from cleaning the floor to rigging lights and sound equipment and testing the equipment. It includes camera blocking, adjustments to the camera equipment, lighting, audio, makeup and sets, and a run-through. The floor is cleaned and the systems are switched on and made 'hot'.
Camera blocking: The process of notating the changing position of the camera, lens size, and focus during a particular scene, together with the corresponding adjustments to lighting and recording. Once the rigging is done and everything is in the right place, the characters' entries and exits on the set are checked; this checking is called blocking.
Run-through: This helps the production team check for problems, not only those that already exist but also those that may occur. The checklist is executed and loose ends are tied up. In short, a run-through is a quick rehearsal.
Final rehearsal: The talent dresses in the appropriate wardrobe, and all production elements are in place. This is the final opportunity for production personnel to solve whatever production problems remain.
The Crew
People involved in this stage include the director, the producer, the scriptwriter, the researcher, the set designer, the make-up artist, and the costume designer.
Producer: A television producer helps to coordinate the financial, legal, administrative, technological, and artistic aspects of a production.
* Associate producer: Performs limited ‘producing’ functions under the authority of a producer; often in charge of the day-to-day running of a production. Usually, he or she is the producer's head assistant, although the task can differ. They are frequently a connection between everyone making shooting possible (the production team) and the people involved after filming to finalize the production, and get it publicized (the post-production team).
* Assistant Producer (AP): An Assistant Producer often doubles as an experienced Researcher, and takes direct charge of the creative content and action within a programme.
* Co-ordinating Producer: Coordinates the work of two or more producers working separately on one or more productions.
* Co-Producer: Typically performs producing functions in tandem with one or more other co-producers. They work as a team, rather than separately on different aspects of the production.
* Executive Producer: Supervises one or more producers in all aspects of their work; sometimes the initiator of the production; usually the ultimate authority on the creative and business aspects of the production. The executive producer arranges the project's financial backing and attempts to maintain a well-budgeted production. Far too often, the executive producer's title is given, misleadingly, to a power player in the equation - sometimes an actor, an actor's agent, or someone else who aided in the production of the project.
* Line Producer: Supervises the physical aspects of the production including personnel, technology, budget, and scheduling. The line producer oversees the project's budget. This involves operating costs such as salaries, production costs, and everyday equipment rental costs. The line producer works with the production manager on costs and expenditure.
* Segment Producer: Produces one or more components of a multi-part production.
* Supervising Producer: Supervises one or more producers in some or all aspects of their work; usually works under the authority of an executive producer.
The Crew…
Director: A television director is usually responsible for directing the actors and other filmed aspects of a television production. The role differs from that of a film director because the major creative controls are likely to be under the purview of a producer. In general, the actors and other regular artists on a show will be familiar enough with their roles that the director's input is confined to technical issues. A film director, by contrast, is responsible for all creative aspects of the movie and would most likely assist with hiring the cast.
The director helps decide on the locations, creates a plan of shooting, and lays out the production shot by shot in his mind's eye. During shooting, the director supervises the overall project, manages shots, and keeps the assignment on budget and on schedule. Although the director holds much power, he is second in command after the producer, who ultimately hired him. Some directors are also the producers of their programme and, with the approval of the funding studio, have a much tighter grip on what makes the final cut than directors usually have.
Researcher: Researchers investigate the project ahead of the shooting schedule to strengthen its accuracy, factual content, creative content, original ideas, and background information, and sometimes handle minor research such as flight details, location conditions, and accommodation details. It is their task to inform the director, producer, and writer of all ideas and knowledge related to the task being undertaken, and of any scene, event, prop, or backdrop that needs to be included to make the show factual and ultimately more believable.
Writer: The writer creates and moulds an original story, or adapts other written, told, or acted stories for production of a television show. The finished work is called a script, which may also be the contribution of several co-writers. Writers can also come under the category of screenwriters. Screenwriters, or script writers, are authors who write the screenplays from which productions are made. Many of them also work as "script doctors," attempting to change scripts to suit directors or studios.
Make-up Artist: A professional make-up artist is usually a beautician and applies make-up to anyone appearing on screen. They concentrate on the areas above the chest: the face, the top of the head, the fingers, hands, arms, and elbows. They manipulate an actor's on-screen appearance, whether to make them look more youthful, larger, older, or, in some cases, monstrous.
Production Designer: The production designer is the person responsible for the visual appearance of a production. They design, plan, organize, and arrange the set design, equipment availability, and the on-screen appearance that a production will have. The set designer is responsible for collaborating with the theatre director to create an environment for the production and then communicating the details of this environment to the technical director, scenic artist, and props master.
Scenic Designers: are responsible for creating scale models of the scenery as well as scale drawings. The set designer also takes instructions from the art director to create the appearance of the stage, and design its technical assembly. The art director, who can also be the production designer, plans and oversees the formation of settings for a project. They are fully aware and conscious of art and design styles, including architecture and interior design. They also work with the cinematographer to accomplish the precise appearance for the project.
Costume designer: The costume designer makes all the clothing and costumes worn by the actors on screen, as well as designing, planning, and organizing the construction of the garments down to the fabric, colours, and sizes. They greatly contribute to the appearance of the film and set a particular mood, time, and feeling. They alter the overall appearance of a project with their designs and constructions, influencing the style of the project and how the audience interprets the show's characters.
The Crew (all of them are 'below the line')
The people mentioned here are generally brought in for the principal photography stage. Those involved in this stage of production include the cinematographer, the production manager, the technical director, the boom operator, the gaffer, the dolly grip, and the key grip.
Floor Manager
The Floor Manager is the Director's representative on the studio floor, and is responsible for giving instruction and direction to crew, cast and guests. It is closest to the role of an Assistant Director, as the job frequently entails giving orders to keep a production moving to schedule. The Floor Manager is always in direct contact with the Director via talkback in the gallery.
Assistant Floor Manager
An Assistant Floor Manager is responsible for setting a stage and prompting contributors on the studio floor and ensuring that everyone knows their place in the script, freeing the Floor Manager for other duties. They often oversee a team of Runners. Increasingly, Assistant Floor Managers are being asked to assist with the design and preparation of props, as well as setting and resetting the action on the studio floor.
Camera operator
As the head member of the camera crew, the camera operator uses the camera as coached by the director. They are accountable for ensuring that the required action is correctly framed, and they need to react instinctively as the proceedings take place. The operator does not physically move the camera from position to position but operates it, guaranteeing that the visual appearance of the project follows the director's initial vision. The cinematographer would usually not manoeuvre the camera on the set, as this is usually the exclusive role of the camera operator.
Production manager
The production manager handles business arrangements concerning the crew and organizes the technical needs of the production. This involves many things, ranging from obtaining the correct equipment with the exact technical requirements to arranging accommodation for the cast and crew. The production manager reports expenses and needs to the line producer. He reports to both the producer and the director, but whatever the producer says is always considered first.
Technical director
In a production control room (PCR), the technical director has overall responsibility for the operations. The technical director is responsible for the proper working of all the equipment in the PCR. They also match the quality and the output of all the cameras on the studio floor through the camera control units. It is their responsibility to supervise all the other crew members working in the PCR. The technical director also coordinates the working of the whole crew and looks into any technical problem which arises before, during or after the shooting of a project.
Cinematographer
The cinematographer records the image on film, translating the director's ideas and creating the atmosphere and the look of the film. The cinematographer often spends hours after shooting checking the negative with the laboratory technicians. A camera team (often consisting of a director of photography, a cameraman, and an assistant cameraman) shares the responsibilities, which include exact framing, sometimes for screens of more than one type.
They must also decide upon the use of masking, the choice of lens, the camera angle, and the control of camera movement. They must either keep the focus sharp or put all or part of the picture out of focus if this effect is required, and they control slow motion or accelerated motion. Many effects require the actors to perform against a background of previously prepared film. The cinematographer must be in command of all these processes.
Lighting director
Lighting Directors make extensive preparations before recording days, including script reading and taking part in discussions about the style required. Planning meetings or reconnaissance are usually held, involving the Director and heads of department including the Production Designer, Costume Designer, Make up Designer, Sound Supervisor and Camera Supervisor. They discuss in detail the logistics of the production, and resolve any conflicts. Lighting is influenced by a wide range of factors, including the script, the director's requirements, set design, location, camera shots, costumes, sound, and the available equipment. Following the planning meeting, Lighting Directors may prepare a lighting plan (or plot) which provides information about the position, type and colour of all the lights to be used. They work closely with the Gaffer, who organises any required extra equipment and power supplies. Lighting Directors oversee the set-up and operation of the lights, by instructing a team of Sparks on the studio floor, and the Lighting Console Operator who controls studio lighting effects, using equipment in the gallery (technical area). During recordings or live transmissions, any final adjustments are made as and when required.
Boom operator
The boom operator is an assistant to the sound engineer or "sound mixer". The main responsibility of the boom operator is microphone placement, sometimes using a "fish pole" with a microphone attached to the end and sometimes, when the situation permits, using a "boom" (most often a "Fisher boom"), a special piece of equipment that the operator stands on and that allows precise control of the microphone at a much greater distance from the actors. They will also place wireless microphones on actors when necessary. The boom operator is part of the sound crew and keeps the microphone boom near the action but away from the camera frame, so that it never appears on screen yet allows the microphone to follow the actors as they move. They work closely with the production sound mixer, or sound recordist, who records all sound while filming, including background noises, dialogue, sound effects, and silence, and who decides which effect to use and when.
Whoever holds the boom also ensures that its shadow does not show in the shot.
Gaffer
The gaffer is the head electrician at the production set, and is in charge of lighting the stage in accordance with the direction of the cinematographer. In television the term chief lighting director is often used instead of gaffer, and sometimes the technical director will light the studio set. The gaffer reports to the Director of Photography (DoP), Lighting Director (LD) or Lighting Designer, and will usually have an assistant called a Best Boy and a crew of rigging electricians.
Dolly grip
In cinematography, the dolly grip is the individual who places and moves the dolly track where it is required, and then pushes and pulls the dolly along that track while filming. A dolly grip must work closely with the camera crew to perfect these complex movements during rehearsals. For moving shots, dolly grips may also push the wheeled platform holding the microphone and boom operator. The dolly is a cart on which the tripod and camera (and occasionally the camera crew) rest. It enables the camera to move smoothly, without bumps or visual interruptions, from the start to the finish of a shot. It is commonly used to travel alongside an actor to give the audience the sense of walking with the actor, or as the actor.
Key grip
The key grip is the head grip on the production set. It is a grip's task to create shadow effects with lights and occasionally to move camera cranes, dollies, and platforms while receiving direction from the cinematographer. Grips can also be the people who do the laborious work on sets. These grips push, pull, roll, and lift various pieces of equipment under the watchful eye of the television director, producer, or art director.
Runner
Runners are the most junior members of a television crew. They are responsible for fetching and carrying work of a production. Their role is usually to support anyone who needs help in a variety of ways, until such time as they have learned enough to assume more responsibilities.
Best boy
He works in the electrical department as the gaffer's principal assistant, handling jobs such as plugging equipment into the power supply.
Spot boy
He is at everyone's beck and call, running small errands such as fetching tea.
Production: Production refers to that part of the process in which footage is recorded.
• The production phase is also known as principal photography.
• The goal of principal photography is to record all the required shots. Pick-up shots may be required when a mistake is noticed, a script change is made, or a performance is deemed unsatisfactory.
• Productions can be broadcast either live or recorded. With the exception of news shows, sports remotes, and some special-event broadcasts, productions are typically recorded for later broadcast or distribution.
• Recording the show or program segment provides an opportunity to fix problems by either making changes during the editing phase or stopping the recording and redoing the segment.
The videotaping mode must be determined before production is undertaken. There are three approaches to videotaping any type of production:
1. SINGLE CAMERA (Film Style): Here a single camera is used to record all the footage from every angle imaginable. Scenes and sequences have to be done over and over again to allow the camera operator to reposition the camera and capture the different angles. The best takes are then assembled during editing and made to look as if they were shot with multiple cameras. Productions that are shot single-camera, film-style are rehearsed and recorded one scene at a time.
2. MULTI-CAMERA STYLE: This increases the editing time but decreases the shooting time, because multiple cameras eliminate the need for multiple takes of the same scene. You simply position each camera at a different angle and shoot the scene once. For example, if the scene involves two characters talking at a bar, one camera may be used to shoot close-ups and mid-shots of character 1, a second camera may shoot character 2 in close-ups and mid-shots, and a third camera may be brought in for establishing shots and two-shots. One shot can be an establishing shot to show the place; the next ones could be close-ups. However, the multi-camera approach, when not used for LIVE or LIVE-on-tape situations, is generally used when the action cannot be repeated or when it is economically unfeasible to re-stage the action using the single-camera approach.
3. MULTI-CAMERA STYLE (LIVE): Instead of each camera feeding to a separate videotape, all cameras are edited live through the use of a switcher or mixer feeding to a single VTR. This requires the least amount of editing. Productions shot live-on-tape will need to be completely rehearsed before recording starts. This includes early walk-through rehearsals, camera rehearsals, and one or more dress rehearsals.
Zoom and Prime Lenses
Zoom lenses came into common use in the early 1960s. Before then, TV cameras had lenses of different focal lengths mounted on a turret on the front of the camera (rather like the turret of a gun). Each lens had to be rotated into position and focused while the camera was off the air. Today, most video cameras use zoom lenses. The effective focal length of a zoom lens can be continuously varied, taking it from a wide-angle to a telephoto perspective. This is accomplished through numerous glass elements, each precisely ground, polished, and positioned, which can be repositioned to change the magnification of the lens. As the lens is zoomed, groups of these elements must move independently at precise speeds.
Prime lens - These are lenses that have one (fixed) focal length. Prime lenses also come in more specialized forms -- super wide-angle, super telephoto, super-fast, etc.
Motorized Zoom Lenses
• Also called servo-controlled lenses, these have an in-built motor that drives the zoom mechanism.
• This enables the zoom to be operated smoothly with only slight pressure.
• They provide a smooth zoom at varying speeds.
• Manually controlled (non-motorised) zoom lenses are often preferred for sports coverage because they can be adjusted much faster between shots.
• Although the zoom and focusing operations are normally driven by the motor, they can also be operated manually, eliminating the need for a power source.
Focal length will determine whether you get the shot you want: how much you will see, and with what effect.
See what a difference focal length can make in an image.

Angle of Horizontal Acceptance (Angle of View): The angle of view is simply the angle from which light rays can pass through the lens to form an image on the photo-sensitive material. The angle of view of a camera depends on the focal length of the photographic lens projecting the image.
Angle of view is usually measured one of three ways:
1. Horizontally (from the left to right edge of the frame)
2. Vertically (from the top to bottom of the frame)
3. Diagonally (from one corner of the frame to its opposite corner)
N.B.: Angle of Acceptance for any lens is directly associated with lens focal length. The longer the focal length (in millimetres) the narrower the angle of view (in degrees).
The diagram shows angles of view for different prime lenses. (It represents an 8 mm lens in a VHS format.)
• A telephoto lens has a narrow angle of view. The angles at the top of the drawing, from about 5 to about 10 degrees, would be considered in the telephoto range.
• The wide-angle range for this lens is represented (from about 45 to 90 degrees).
• The normal angle of view lies in between the telephoto and wide-angle ranges; it is roughly 48.7 degrees. Ultra wide-angle lenses, also known as fisheye lenses, cover up to 180° (or even wider in special cases).
• Super-telephoto lenses generally cover from about 8° down to less than 1°.
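To make the relationship between focal length and angle of view concrete, here is a minimal sketch in Python, assuming the standard rectilinear-lens relation and a 36 mm frame width (the horizontal dimension of the 35 mm still format); the exact figure depends on whether the horizontal, vertical, or diagonal dimension is measured, and the function name and sample focal lengths are only illustrative.

import math

def angle_of_view(focal_length_mm, frame_width_mm=36.0):
    # Horizontal angle of view (degrees) for a rectilinear lens focused at infinity:
    # angle = 2 * arctan(frame width / (2 * focal length))
    return math.degrees(2 * math.atan(frame_width_mm / (2 * focal_length_mm)))

for f in (24, 50, 100, 300):
    print(f"{f} mm lens -> about {angle_of_view(f):.1f} degrees horizontal")

As the N.B. above says, the longer the focal length, the narrower the computed angle.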
Zoom lenses are a special case wherein the focal length, and hence angle of view, of the lens can be altered mechanically without removing the lens from the camera.
• Longer lenses magnify the subject more, apparently compressing distance and (when focused on the foreground) blurring the background because of their shallower depth of field. Wider lenses tend to magnify distance between objects while allowing greater depth of field.
• Another result of using a wide angle lens is a greater apparent perspective distortion when the camera is not aligned perpendicularly to the subject: parallel lines converge at the same rate as with a normal lens, but converge more due to the wider total field. For example, buildings appear to be falling backwards much more severely when the camera is pointed upward from ground level than they would if photographed with a normal lens at the same distance from the subject, because more of the subject building is visible in the wide-angle shot.
• Because different lenses generally require a different camera–subject distance to preserve the size of a subject, changing the angle of view can indirectly distort perspective, changing the apparent relative size of the subject and foreground.
When the focal length of a lens is doubled, the size of the image on the target is also doubled. So, with the camera in the same position, a short focal length creates a wide view and a long focal length creates an enlarged image in the camera.
A concern in using different focal length lenses at different distances is the relative amount of background area that will be included in the picture.
The angle of horizontal acceptance for a human eye is around 48.7 degrees. For a 35 mm film format, the 50 mm lens roughly has a similar angle of acceptance as a human eye. This is why a 50mm lens is considered to be a normal lens for 35 mm format.
• 16 mm format’s normal angle lens is of 25 mm focal length.
• 70 mm format’s normal angle lens is of 100 mm focal length.
• In 35 mm format, all lenses with a focal length shorter than 50mm are considered wide-angle
• In 35 mm format, all lenses with a focal length greater than 50mm are considered telephoto.
• These lenses can run the spectrum from ultra-wide-angle to super-telephoto depending on their focal lengths.
Telephoto lenses can be used to draw attention to a specific subject, where wide angle lenses can be used to show the vastness or expanse of a scene.
Lens Focal Length differences affect more than just the size of the image on the camera's target -- or in the case of a motion picture camera, the film's surface area. Also affected are:
• The apparent distance between objects in the scene
• The apparent speed of objects moving toward or away from the camera
• The relative size of objects at different distances
Manipulating Distance
A long focal length lens coupled with great camera-to-subject distance appears to reduce the distance between objects in front of the lens.
• The illustration in the centre shows differences in the camera-to-subject distance.
• The woman, the fountain, and the other elements of the scene are not changed. In other words, all of the object and subject distances remain the same in both pictures.
• But the fountain (in the background) appears to be much closer to the subject (the woman) in the picture on the right than in the picture on the left.
• It is important to note that only by manipulating the focal length and the subject-to-camera distance have we achieved this compression or exaggeration of space and distance.
N.B.: To compensate for this difference and keep the size of the woman about the same in each picture, the photographer used lenses of different focal lengths:
1. A wide-angle lens with a short focal length for the first photo, and
2. A telephoto with a long focal length for the second.
In the setting above, if we used the wide-angle lens while standing at the distance used for the telephoto picture on the right, the woman would obviously end up being rather small in the setting. But let's assume we enlarged the section of the image with the woman to make her equal in size to the image of telephoto lens (the photo on the right above).
The result (although probably grainy and blurry due to great enlargement) would have the same fountain-to-woman distance perspective as the photo on the right.
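For readers who want the arithmetic behind this compensation, here is a rough sketch assuming the far-field approximation that image size is proportional to focal length divided by camera-to-subject distance; the focal lengths and distances below are hypothetical, not taken from the photographs described above.

def matching_focal_length(focal_length_mm, old_distance_m, new_distance_m):
    # Far-field approximation: image size ~ focal length / distance,
    # so keeping that ratio constant keeps the subject the same size in frame.
    return focal_length_mm * (new_distance_m / old_distance_m)

# Hypothetical values: a 28 mm wide-angle used 3 m from the subject would need
# roughly a 280 mm telephoto to keep her the same size in frame from 30 m away.
print(matching_focal_length(28, 3, 30))  # -> 280.0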
The type of lens that we decide to use when recording a visual can have an enormous impact on the way that objects appear in the final photograph, in relation to one another, even if those objects remain stationary.
The relationship between an object in the foreground and an object in the background is called perspective. Because telephoto lenses can compress the distance between a foreground and background object and have a very small field of view (narrow angle of acceptance), they can drastically alter the perspective in a photograph. Even when the objects in a scene do not move, a photographer can use a wide-angle lens to show a background object as distant and small, or a telephoto lens to show a background object as near and towering. The lens used to take the picture can set the mood for the picture.

Changes in the Apparent Speed of Objects
A change in camera-to-subject distance and the choice of lens focal length affect the apparent speed of a subject or object moving toward or away from the camera, in addition to affecting the apparent spatial relationships.
• Physically moving the camera away from the subject and using a long focal length lens (or a zoom lens at its maximum focal length) slows down the apparent speed of objects moving toward or away from the camera.
• E.G.: Filmmakers often use this technique to create an effect. For instance, in The Graduate, (Mike Nichols, 1967), Dustin Hoffman runs down a street toward a church. The camera with a very long focal length lens conveys what he's feeling: although he's running as fast as he can, it seems as if he's hardly moving. Both he and the audience fear he won't make it to the church on time, thus, increasing the dramatic tension in the story.
• Conversely, moving close to the subject matter with a wide-angle lens increases (exaggerates) the apparent speed of objects moving toward or away from the camera.
• You can easily visualize why. If you were standing on a distant hilltop watching someone run around a track or, perhaps, traffic on a distant roadway, they would seem to be hardly moving. It would be like watching with a long focal length lens. But stand right next to the track or roadway (using your visual wide-angle perspective), the person or traffic would seem to whiz by.
Perspective Changes: The use of a wide-angle lens combined with a limited camera-to-subject distance creates a type of perspective distortion.
• If a videographer uses a short focal length lens to shoot a tall building from street level, the parallel lines along the sides of the building appear to converge toward the top. (Note the photo on the left.) At this comparatively close distance, the building appears to be leaning backward.
• Compare the above photo (using a wide-angle lens) with the photo (centre) taken from a much greater distance with a normal focal length lens.
• You get even more distortion using an extreme wide-angle lens when you get very close to subjects. (Note the photo on the right-top). The solution -- assuming this is not the effect you want -- is to move back and use the lens at a normal-to-telephoto setting.
• (Here's another example of perspective distortion.) Note the convergence of lines in the photo of the video switcher (right-below). A close camera distance coupled with a wide-angle lens setting makes the rows in the foreground look much farther apart than those in the background.
• Again, you can eliminate this type of distortion by moving the camera back and using a longer focal length lens.
Camera moves
Pan: It shows what's to the left or right of the screen:
• reveals the setting
• sweeps across a subject wider than the screen
• shows the relationship between two subjects.
• The camera moves horizontally (left or right). A tripod is used for a smooth effect. This is done to follow a subject or show the distance between two objects. Pan shots work great for panoramic views.
Tilt: This refers to pivoting the camera up or down without changing its position. It shows what's above or below the screen, reveals parts of a vertical subject, is useful for showing tall objects, shows the relationship between parts of a subject, and can add suspense or surprise.
Pedestal: Not tilting, but physically raising or lowering the camera, usually on a tripod or studio pedestal. One pedestals the camera up or down to the preferred height.
Truck: The camera physically moves left or right around the subject, to show another side of the subject, add dimension, or show the physical relationship between objects or subjects.
Dolly: The camera physically moves toward or away from the subject. The camera is set on tracks or wheels and moved towards or back from the subject. The dolly itself is a train-track-like contraption used for a dolly shot, or a wheeled device attached to a tripod; it has large wheels, rolls smoothly, and often has a seat for the videographer, though a rolling cart or even a skateboard can also serve. The move is used to change the focus of attention from a broad view to a detail of the subject, or vice versa, and provides the sense of physically moving closer to or farther from the subject.
Zoom: The view of the subject changes from tight to wide or wide to tight using the zoom control on the lens; the camera does not move. This is done to change the focus of attention from a broad view to a detail of the subject or vice versa, to keep the size of a moving subject the same in the frame, or to reveal the surroundings of the subject (zoom out). It provides the sense of magnifying the subject without getting physically closer (zoom in).
Zoom vs. Dolly: Another way to alter what the camera sees is to actually move (dolly) the camera toward or away from a subject. It is often assumed that the effect will be the same as zooming; however, the perspective changes only when the camera is moved. When you zoom, you optically enlarge smaller and smaller parts of the picture to fill the screen. When you dolly, you physically move the entire camera toward or away from the subject matter, and the view of the central and surrounding subject matter changes just as it would if you walked toward or away from it.
Zoom Ratio: This is used to define the focal length range of a zoom lens. If the maximum range through which a particular lens can be zoomed is 10 mm to 100 mm, it is said to have a 10:1 (ten-to-one) zoom ratio (10 times the minimum focal length of 10 mm equals 100 mm).
But with this designation one still doesn't know what the minimum and maximum focal lengths are. A 10:1 zoom lens could be a 10 to 100 mm lens or a 100 to 1,000 mm lens, and the difference would be quite dramatic.
To address this issue, we refer to the first zoom lens as a 10 X 10 (ten-by-ten) and the second as a 100 X 10. The first number represents the minimum focal length and the second number the multiplier. So a 12 X 20 zoom lens would have a minimum focal length of 12mm and a maximum focal length of 240mm.
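That arithmetic is simple enough to sketch in a few lines of Python, using the figures quoted above (the function name is only illustrative):

def zoom_range(min_focal_mm, multiplier):
    # A lens written as "min X multiplier" runs from min to min * multiplier.
    return min_focal_mm, min_focal_mm * multiplier

print(zoom_range(10, 10))   # a 10 X 10 lens -> (10, 100), i.e. a 10:1 ratio
print(zoom_range(100, 10))  # a 100 X 10 lens -> (100, 1000), also 10:1
print(zoom_range(12, 20))   # the 12 X 20 example above -> (12, 240)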
Zoom ratios can run as high as 200:1, though the lenses often used for network sports coverage have ratios of around 70:1. With a 70:1 zoom lens, a camera covering a football game could zoom out and get a wide shot of the field, and then, by zooming in, fill the screen with a football sitting in the middle of the field.
In photography, shutter speed is the length of time a shutter is open; the total exposure is proportional to this exposure time, or duration of light reaching the film or image sensor.
Factors that affect the total exposure of a visual frame include the scene luminance, the aperture size (f-number), and the exposure time (shutter speed); photographers can trade off shutter speed and aperture by using units of stops. A stop up or down on either will halve or double the amount of light it regulates; exposures of equal exposure value can therefore be easily calculated and selected. For any given total exposure, or exposure value, a fast shutter speed requires a larger aperture (smaller f-number). Similarly, a slow shutter speed (a longer exposure time) can be compensated by a smaller aperture (larger f-number). In short, it is an inverse relationship.
Slow shutter speeds are often used in low light conditions, extending the time until the shutter closes, and increasing the amount of light gathered. This basic principle of photography, the exposure, is used in film and digital cameras, the image sensor effectively acting like film when exposed by the shutter.
Shutter speed is measured in seconds. A typical shutter speed for photographs taken in sunlight is 1/125th of a second. In addition to its effect on exposure, shutter speed changes the way movement appears in the picture. Very short shutter speeds are used to freeze fast-moving subjects, for example at sporting events. Very long shutter speeds are used to intentionally blur a moving subject for artistic effect.
Adjustment to the aperture controls the depth of field, the distance range over which objects are acceptably sharp; such adjustments generally need to be compensated by changes in the shutter speed.
In early days of photography, available shutter speeds were somewhat ad hoc. Following the adoption of a standardized way of representing aperture so that each major step exactly doubled or halved the amount of light entering the camera (f/2.8, f/4, f/5.6, f/8, f/11, f/16, etc.), a standardized 2:1 scale was adopted for shutter speed so that opening one aperture stop and reducing the shutter speed by one step resulted in the identical exposure. The agreed standards for shutter speeds are:
1/1000 s
1/500 s
1/250 s
1/125 s
1/60 s
1/30 s
1/15 s
1/8 s
1/4 s
1/2 s
1 s
Each standard increment either doubles the amount of light (longer time) or halves the amount of light (shorter time). For example, if you move from 1 sec to 1/2 second, you have effectively halved the amount of light entering the shutter. This scale can be extended at either end in specialist cameras. Some older cameras use the 2:1 ratio at slightly different values, such as 1/100 s and 1/50 s, although mechanical shutter mechanisms were rarely precise enough for the difference to have any significance.
The term "speed" is used in reference to short exposure times as fast, and long exposure times as slow. Shutter speeds are often designated by the reciprocal time, for example 60 for 1/60 s.
Camera shutters often include one or two other settings for making very long exposures:
B (for bulb) — keep the shutter open as long as the shutter release is held
T (for time) — keep the shutter open until the shutter release is pressed again
The ability of the photographer / cinematographer / videographer to take images without noticeable blurring by camera movement is an important parameter in the choice of slowest possible shutter speed for a handheld camera. The rough guide used by most 35 mm photographers is that the slowest shutter speed that can be used easily without much blur due to camera shake is the shutter speed numerically closest to the lens focal length.
For example, for handheld use of a 35 mm camera with a 50 mm normal lens, the closest shutter speed is 1/60 s. This rule can be adjusted for the intended application of the photograph: an image intended for significant enlargement and close-up viewing would require faster shutter speeds to avoid obvious blur. Through practice and special techniques, such as bracing the camera, arms, or body to minimize camera movement, longer shutter speeds can be used without blur. If a shutter speed is too slow for hand-holding, a camera support (usually a tripod) must be used. Image stabilization can often permit the use of shutter speeds 3-4 stops slower (exposures 8-16 times longer).
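A minimal sketch of this rule of thumb in Python, assuming the standard shutter-speed scale listed earlier and treating each stop of image stabilization as a doubling of the usable exposure time; the function and its rounding towards the faster standard speed are illustrative rather than a formal standard.

STANDARD_SPEEDS = [1, 1/2, 1/4, 1/8, 1/15, 1/30, 1/60, 1/125, 1/250, 1/500, 1/1000]

def slowest_handheld(focal_length_mm, stabilisation_stops=0):
    # Rule of thumb: slowest safe exposure time is about 1/focal-length seconds;
    # each stop of stabilisation roughly doubles the usable exposure time.
    limit = (1.0 / focal_length_mm) * (2 ** stabilisation_stops)
    fast_enough = [s for s in STANDARD_SPEEDS if s <= limit]
    return max(fast_enough) if fast_enough else min(STANDARD_SPEEDS)

for focal, stops in ((50, 0), (50, 3)):
    t = slowest_handheld(focal, stops)
    print(f"{focal} mm lens, {stops} stops of stabilisation -> about 1/{round(1 / t)} s")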
Shutter priority refers to a shooting mode used in semi-automatic cameras. It allows the photographer to choose a shutter speed setting and allow the camera to decide the correct aperture. This is sometimes referred to as Shutter Speed Priority Auto Exposure, or TV (time value) mode.
Creative Control Using Shutter Speed: The shutter speed can make the difference between good video and poor video; it can separate the amateurs from the professionals. Unlike the shutters used in still cameras, the shutter used in most video cameras is not mechanical: "shutter" speeds simply represent the time that the light-induced charge is allowed to build electronically in the chip before the cycle is repeated.
Allowing the charge to build for longer than normal results in a much brighter video image. Conversely, if there is a need to stop action, shutter speeds faster than normal can be selected. Most professional video cameras offer a series of shutter speeds from 1/60 second (normal) up to 1/2,000 second. Many go beyond this to 1/5,000, 1/10,000, and, as noted earlier, even 1/12,000 second.
Increasing the shutter speed reduces the exposure. To compensate, the iris of the lens must be opened up. The higher speeds (1/1,000 second and above) make possible clear slow-motion playbacks and freeze-frame still images, such as the one we see here.
Shutter Speeds and F-Stops: There is a direct relationship between shutter speeds and f-stops. Each of the combinations in the table below represents the same exposure: when the shutter speed is doubled, the lens must be opened up one f-stop to provide the same net exposure. In other words, the increased shutter speed cuts the exposure time in half, but opening the iris one f-stop lets twice as much light through the lens to compensate.
A scene being shot may need an exposure of, say, 1/100 at f/4 instead of f/11; the shutter speed and f-stop relationships then shift accordingly. The light sensitivity of the camera should also be kept in mind, because it affects the shutter-speed/f-stop scale.
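The trade-off can be sketched as follows; the starting values (1/60 s at f/11) are illustrative, and the computed speeds are exact doublings rather than the rounded markings (1/125, 1/250) found on real shutter-speed dials.

F_STOPS = [1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22]

def equivalent_exposures(exposure_time_s, f_number, steps=4):
    # Each step halves the exposure time and opens the lens one full stop,
    # keeping the net exposure constant.
    i = F_STOPS.index(f_number)
    pairs = []
    for n in range(steps + 1):
        if i - n < 0:
            break
        pairs.append((exposure_time_s / (2 ** n), F_STOPS[i - n]))
    return pairs

for t, f in equivalent_exposures(1 / 60, 11):
    print(f"1/{round(1 / t)} s at f/{f}")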
Shutter Speed and Stroboscopic Effect: A stroboscopic effect (where you see a rapid sequence of discrete images associated with movement) can occur in video cameras with very high (above 1/250th second) and very low (below 1/60th second) shutter speeds.
Low Shutter Speed: Just like a time exposure in still photography, it's possible in some video cameras, as we've noted, to use exposure rates below the normal 1/60th second. This allows the effect of the light to build in the CCD beyond the normal scanning time.
High Shutter Speed: When shutter speed intervals shorter than 1/250th second are used, action tends to be cleanly frozen into crisp, sharp, still images. Without the slight blur that helps smooth out the transition between successive frames, we may notice a subtle stroboscopic effect when we view rapid action. Even so, the overall effect is to make images clearer, especially for slow-motion playbacks.

The Techniques of production
Lenses:
The Basics
The choice of camera lenses has a major effect on how subject matter will be seen by a viewer. It provides the cameraperson with creative power. The focal length of a lens affects the appearance of subject matter in several ways.
Lens Focal Length: Focal length is the distance from the optical centre of the lens to the focal plane (target) of the video camera when the lens is focused at infinity (any object in the far distance). Infinity is used as the reference because the lens-to-target distance for most lenses increases when the lens is focused on anything closer than infinity. Focal length is generally measured in millimetres. With fixed focal length, or prime, lenses the focal length cannot be varied.
Lens speed is the maximum amount of light that can pass through the lens onto the target (the photosensitive medium). For example, owls can see better in low light than humans can, so we can say that the lenses of their eyes are 'faster' than ours: they allow more light into the retinal area.
Like the pupil of an eye, which automatically controls the amount of light admitted despite varying light levels, the iris of the camera lens controls the amount of light passing through the lens. Under low light conditions, the pupils of our eyes dilate to let in maximum light; in bright sunlight, they constrict to avoid overloading the light-sensitive rods and cones at the back of the eye.
In the same way, the amount of light falling on the light-sensitive target of a TV camera must be controlled with the aid of an iris in the middle of the lens. Too much light will overexpose and wash out the picture; too little will cause the loss of detail in the darker areas.
The various specific numerical settings through which an iris can be adjusted (from a small to a large opening) are known as f-stops. The "f" stands for factor. An f-stop is the ratio between the lens opening and the lens focal length. More specifically, the f-stop equals the focal length divided by the size of the lens opening.
f-stop = focal length / lens opening
The smaller the f-stop number, the more light the lens transmits.
Opening the iris one f-stop (from f/22 to f/16, for example) represents a 100 percent increase in the light passing through the lens. Conversely, "stopping down" the lens one stop (from f/16 to f/22, for example) cuts the light by 50 percent.
Knowing this is important so that the lens iris can be adjusted to compensate for a picture that is either too light or too dark. Other f-stops, such as f/1.2, f/3.5, and f/4.5, are mid-point settings between whole f-stops, and on some lenses they represent the maximum aperture (speed) of the lens.
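Here is a minimal sketch of these relationships; the 50 mm focal length and the particular openings are illustrative values only.

def f_stop(focal_length_mm, lens_opening_mm):
    # f-stop = focal length / lens opening
    return focal_length_mm / lens_opening_mm

def lens_opening(focal_length_mm, f_number):
    # The same relationship rearranged: opening = focal length / f-stop.
    return focal_length_mm / f_number

print(f_stop(50, 3.125))     # a 50 mm lens with a 3.125 mm opening -> f/16
print(lens_opening(50, 22))  # stopping down to f/22 -> roughly a 2.3 mm opening
# Transmitted light varies with the square of the opening, so one whole stop
# (f/16 to f/22) cuts the light roughly in half:
print((lens_opening(50, 22) / lens_opening(50, 16)) ** 2)  # ~0.53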
Remember that as we increase or decrease the f-stop setting, a corresponding change in depth of field takes place.
In optics, particularly as it relates to film and photography, the depth of field (DOF) is the portion of a scene that appears sharp in the image. Although a lens can precisely focus at only one distance, the decrease in sharpness is gradual on either side of the focused distance, so that within the DOF, the un-sharpness is imperceptible under normal viewing conditions.
For some images, such as landscapes, a large DOF may be appropriate, while for others, such as portraits, a small DOF may be more effective.
The DOF is determined by the subject distance, the lens focal length, and the lens f-number (relative aperture). Except at close-up distances, DOF is approximately determined by the subject magnification and the lens f-number. For a given f-number, increasing the magnification, either by moving closer to the subject or using a lens of greater focal length, decreases the DOF; decreasing magnification increases DOF. For a given subject magnification, increasing the f-number (decreasing the aperture diameter) increases the DOF; decreasing f-number decreases DOF.
When focus is set to the ‘hyper focal’ distance, the DOF extends from half the ‘hyper focal’ distance to infinity, and is the largest DOF possible for a given f-number.
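As a rough illustration of these dependencies, the sketch below uses the common moderate-distance approximation DOF ~ 2 * N * c * (1 + m) / m^2, where N is the f-number, c is the circle-of-confusion limit (discussed later in this article) and m is the subject magnification; the values are illustrative, and the approximation breaks down at close-up distances and near the hyperfocal distance.

def depth_of_field_mm(f_number, magnification, coc_mm=0.029):
    # Moderate-distance approximation: DOF ~ 2 * N * c * (1 + m) / m**2
    return 2 * f_number * coc_mm * (1 + magnification) / magnification ** 2

print(depth_of_field_mm(4, 0.05))   # wide aperture  -> roughly 97 mm of acceptably sharp depth
print(depth_of_field_mm(16, 0.05))  # small aperture -> about four times as much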
The advent of digital technology in photography has provided additional means of controlling the extent of image sharpness; some methods allow DOF that would be impossible with traditional techniques, and some allow the DOF to be determined after the image is made.
Additional information for students:
Depth of focus DOF is a lens optics concept that measures the tolerance of placement of the image plane (the film plane in a camera) in relation to the lens. While the phrase depth of focus was historically used, and is sometimes still used, to mean depth of field, in modern times it is more often reserved for the image-side depth. Depth of field is a measurement of depth of acceptable sharpness in the object space, or subject space. Depth of focus, however, is a measurement of how much distance exists behind the lens wherein the film plane will remain sharply in focus. It can be viewed as the flip side of depth of field, occurring on the opposite side of the lens. Where depth of field often can be measured in macroscopic units such as meters and feet, depth of focus is typically measured in microscopic units such as fractions of a millimetre or thousandths of an inch. Since the measurement indicates the tolerance of the film's displacement within the camera, depth of focus is sometimes referred to as "lens-to-film tolerance."
The same factors that determine depth of field also determine depth of focus, but these factors can have different effects than they have in depth of field. Both depth of field and depth of focus increase with smaller apertures. For distant subjects (beyond macro range), depth of focus is relatively insensitive to focal length and subject distance, for a fixed f-number. In the macro region, depth of focus increases with longer focal length or closer subject distance, while depth of field decreases.
In small-format cameras, the smaller circle of confusion limit yields a proportionately smaller depth of focus. In motion picture cameras, different lens mount and camera gate combinations have exact flange focal depth measurements to which lenses are calibrated.
The choice to place gels or other filters behind the lens becomes a much more critical decision when dealing with smaller formats. Placement of items behind the lens will alter the optics pathway, shifting the focal plane. Therefore, often this insertion must be done in concert with stopping down the lens in order to compensate enough to make any shift negligible given a greater depth of focus. It is often advised in 35 mm motion picture filming not to use filters behind the lens if the lens is wider than 25 mm.
A rough formula often used to quickly calculate depth of focus is the product of the focal length times the f-stop divided by 1000; the formula makes most sense in the case of normal lens (as opposed to wide-angle or telephoto), where the focal length is a representation of the format size. The precise formula for depth of focus is two times the f-number times the circle of confusion times the quantity of one plus the magnification factor. However, the magnification factor depends on the focal length and format size and exact focus the lens is set to, which can be difficult to calculate. Therefore, the first formula is often used as a guideline, as it is much easier to calculate. It relies on the historical convention of circle of confusion limit equal to focal length divided by 1000, which is deprecated in modern photographic teachings, in favour of format size (for example, along the diagonal) divided by 1000 or 1500. See the article circle of confusion.
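Both formulas are easy to compute; the sketch below implements them side by side with illustrative values (it does not try to reconcile the two, since the rough form is only a rule of thumb).

def depth_of_focus_rough_mm(focal_length_mm, f_number):
    # Rule of thumb quoted above: focal length x f-number / 1000.
    return focal_length_mm * f_number / 1000

def depth_of_focus_mm(f_number, coc_mm, magnification):
    # Precise form quoted above: 2 x f-number x circle of confusion x (1 + magnification).
    return 2 * f_number * coc_mm * (1 + magnification)

print(depth_of_focus_rough_mm(50, 8))    # 0.4 mm for a 50 mm lens at f/8
print(depth_of_focus_mm(8, 0.029, 0.1))  # ~0.51 mm with c = 0.029 mm and m = 0.1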
What is hyper focal distance?
In optics and photography, hyper focal distance is a distance beyond which all objects can be brought into an "acceptable" focus. There are two commonly used definitions of hyper focal distance, leading to values that differ only slightly:
1. The first definition: the hyper focal distance is the closest distance at which a lens can be focused while keeping objects at infinity acceptably sharp; that is, the focus distance with the maximum depth of field. When the lens is focused at this distance, all objects at distances from half of the hyper focal distance out to infinity will be acceptably sharp.
2. The second definition: the hyper focal distance is the distance beyond which all objects are acceptably sharp, for a lens focused at infinity.
The distinction between the two meanings is rarely made, since they are interchangeable and have almost identical values. The value computed according to the first definition exceeds that from the second by just one focal length.
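A minimal sketch of the first definition, using the standard formula H = f^2 / (N x c) + f and an assumed circle-of-confusion limit of 0.029 mm (a common 35 mm value, discussed later in this article); the focal length and aperture are illustrative.

def hyperfocal_mm(focal_length_mm, f_number, coc_mm=0.029):
    # First definition: H = f**2 / (N * c) + f
    return focal_length_mm ** 2 / (f_number * coc_mm) + focal_length_mm

H = hyperfocal_mm(50, 8)
print(H / 1000)  # ~10.8 m: focus here, and ...
print(H / 2000)  # ... everything from ~5.4 m to infinity is acceptably sharp.
# The second definition simply omits the trailing "+ f", so the two values
# differ by exactly one focal length (50 mm here), as noted above.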
What is hyper-focal focussing?
Hyper focal focusing is a technique which yields the maximum depth of field for a given combination of f stop and lens focal length.
What is hyper-focal distance?
It is a distance from your camera that you focus at to maximise the depth of field.
Why do I need it?
Old fixed focal length camera lenses had a scale for determining depth of field. Modern SLR cameras come equipped with zoom lenses and auto focusing. Because depth of field varies with focal length, or more correctly with image scale, manufacturers cannot put a simple scale on a zoom lens.
How do I find the hyper-focal distance?
It is a mathematical quantity that can be calculated from a formula. Failed maths? Relax. Tables for most common focal lengths and f-stops used on 35mm cameras are provided with this article.
How do I apply the hyper-focal distance?
Look up the table for your lens focal length and f-number used. Set your lens to focus at that distance. Everything from about half of that distance to infinity will be in focus. You must set the lens focus to the hyper focal distance setting manually.
In optics, a circle of confusion, (also known as disk of confusion, circle of indistinctness, blur circle, etc.), is an optical spot caused by a cone of light rays from a lens not coming to a perfect focus when imaging a point source.
The depth of field is the region where the size of the circle of confusion is less than the resolution of the human eye (or of the display medium). Circles with a diameter less than the circle of confusion will appear to be in focus.
Two important uses of this term and concept need to be distinguished:
1. To calculate a camera's depth of field (“DoF”), one needs to know how large a circle of confusion can be considered to be an acceptable focus. The maximum acceptable diameter of such a circle of confusion is known as the maximum permissible circle of confusion, the circle of confusion diameter limit, or the circle of confusion criterion, but is often incorrectly called simply the circle of confusion.
2. Recognizing that real lenses do not focus all rays perfectly under even the best of conditions, the circle of confusion of a lens is a characterization of its optical spot. The term circle of least confusion is often used for the smallest optical spot a lens can make, for example by picking a best focus position that makes a good compromise between the varying effective focal lengths of different lens zones due to spherical or other aberrations. Diffraction effects from wave optics and the finite aperture of a lens can be included in the circle of least confusion, or the term can be applied in pure ray (geometric) optics.
In idealized ray optics, where rays are assumed to converge to a point when perfectly focused, the shape of a mis-focused spot from a lens with a circular aperture is a hard-edged disk of light (that is, a hockey-puck shape when intensity is plotted as a function of x and y coordinates in the focal plane). A more general circle of confusion has soft edges due to diffraction and aberrations, and may be non-circular due to the aperture (diaphragm) shape. So the diameter concept needs to be carefully defined to be meaningful. The diameter of the smallest circle that can contain 90% of the optical energy is a typical suitable definition for the diameter of a circle of confusion; in the case of the ideal hockey-puck shape, it gives an answer about 5% less than the actual diameter.
In photography, the circle of confusion diameter limit (“CoC”) is sometimes defined as the largest blur circle that will still be perceived by the human eye as a point when viewed at a distance of 25 cm (and variations thereon).
With this definition, the CoC in the original image depends on three factors:
• Visual acuity. For most people, the closest comfortable viewing distance, termed the near distance for distinct vision (Ray 2002, 216), is approximately 25 cm. At this distance, a person with good vision can usually distinguish an image resolution of 5 line pairs per millimetre (lp/mm), equivalent to a CoC of 0.2 mm in the final image.
• Viewing conditions. If the final image is viewed at approximately 25 cm, a final-image CoC of 0.2 mm often is appropriate. A comfortable viewing distance is also one at which the angle of view is approximately 60° (Ray 2002, 216); at a distance of 25 cm, this corresponds to about 30 cm, approximately the diagonal of an 8″×10″ image. It often may be reasonable to assume that, for whole-image viewing, an image larger than 8″×10″ will be viewed at a distance greater than 25 cm, for which a larger CoC may be acceptable.
• Enlargement from the original image (the focal plane image on the film or image sensor) to the final image (print, usually). If an 8×10 original image is contact printed, there is no enlargement, and the CoC for the original image is the same as that in the final image. However, if the long dimension of a 35 mm image is enlarged to approximately 25 cm (10 inches), the enlargement is approximately 7×, and the CoC for the original image is 0.2 mm / 7, or 0.029 mm.
All three factors are accommodated with this formula:
1. CoC Diameter Limit (mm) = anticipated viewing distance (cm) / desired print resolution (lp/mm) for a 25 cm viewing distance / anticipated enlargement factor / 25. For example, to support a print resolution equivalent to 5 lp/mm for a 25 cm viewing distance when the anticipated viewing distance is 50 cm and the anticipated enlargement factor is 8: CoC Diameter Limit = 50 / 5 / 8 / 25 = 0.05 mm (this arithmetic is checked in the sketch after this list). Since the final image size is not usually known at the time of taking a photograph, it is common to assume a standard size such as 25 cm width, along with a conventional final-image CoC of 0.2 mm, which is 1/1250 of the image width. Conventions in terms of the diagonal measure are also commonly used. The DoF computed using these conventions will need to be adjusted if the original image is cropped before enlarging to the final image size, or if the size and viewing assumptions are altered.
2. Using the so-called “Zeiss formula” the circle of confusion is sometimes calculated as d/1730 where d is the diagonal measure of the original image (the camera format). For full-frame 35 mm format (24 mm × 36 mm, 43 mm diagonal) this comes out to be 0.024 mm. A more widely used CoC is d/1500, or 0.029 mm for full-frame 35 mm formats, which corresponds to resolving 5 lines per millimetre on a print of 30 cm diagonal. Values of 0.030 mm and 0.033 mm are also common for full-frame 35 mm formats. For practical purposes, d/1730, a final-image CoC of 0.2 mm, and d/1500 give very similar results.
3. Angular criteria for CoC have also been used. Kodak (1972) recommended 2 minutes of arc (the Snellen criterion of 30 cycles/degree for normal vision) for critical viewing, giving CoC ≈ f / 1720, where f is the lens focal length. For a 50 mm lens on full-frame 35 mm format, this gave CoC ≈ 0.0291 mm. Angular criteria evidently assumed that a final image would be viewed at “perspective-correct” distance (i.e., the angle of view would be the same as that of the original image): Viewing distance = focal length of taking lens × enlargement. However, images seldom are viewed at the “correct” distance; the viewer usually doesn't know the focal length of the taking lens, and the “correct” distance may be uncomfortably short or long. Consequently, angular criteria have generally given way to a CoC fixed to the camera format.
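To keep the arithmetic in items 1 and 2 above honest, here is a minimal Python sketch that simply re-runs the numbers already quoted; any small differences are rounding only.

# Checking the CoC arithmetic from the list above.
def coc_limit(viewing_distance_cm, resolution_lp_mm, enlargement):
    # CoC (mm) = viewing distance / print resolution / enlargement / 25
    return viewing_distance_cm / resolution_lp_mm / enlargement / 25.0

print(coc_limit(50, 5, 8))    # 0.05 mm, as in item 1

# Standard-viewing case: 35 mm frame (36 mm long side) enlarged to a 25 cm print,
# i.e. roughly 7x, with a 0.2 mm final-image CoC.
print(0.2 / (250 / 36.0))     # about 0.029 mm

# Format-diagonal conventions for full-frame 35 mm (about 43 mm diagonal), item 2.
print(43 / 1730.0)            # about 0.025 mm (the "Zeiss formula" value)
print(43 / 1500.0)            # about 0.029 mm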
The common values for CoC may not be applicable if reproduction or viewing conditions differ significantly from those assumed in determining those values. If the photograph will be magnified to a larger size, or viewed at a closer distance, then a smaller CoC will be required. If the photo is printed or displayed using a device, such as a computer monitor, that introduces additional blur or resolution limitation, then a larger CoC may be appropriate since the detectability of blur will be limited by the reproduction medium rather than by human vision; for example, an 8″×10″ image displayed on a CRT may have greater depth of field than an 8″×10″ print of the same photo, due to the CRT display having lower resolution; the CRT image is less sharp overall, and therefore it takes a greater misfocus for a region to appear blurred.
Depth of field formulae derived from geometrical optics imply that any arbitrary DoF can be achieved by using a sufficiently small CoC. Because of diffraction, however, this isn't quite true. The CoC is decreased by increasing the lens f-number, and if the lens is stopped down sufficiently far, the reduction in defocus blur is offset by the increased blur from diffraction. See the Depth of field article for a more detailed discussion.
The larger the f-stop number (that is, the smaller the iris opening), the greater the depth of field. Therefore, the depth of field of a lens set at f/11 is greater than the same lens set at f/5.6, and depth of field at f/5.6 will be greater than at f/2.8. The depth of field extends approximately one-third of the way in front of the point of focus and two-thirds behind it.
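As a rough illustration of how strongly the f-number affects depth of field, the sketch below uses the standard geometric-optics approximation; the 50 mm focal length, 0.03 mm circle of confusion and 3 m focus distance are assumed example values, not a prescription.

# Rough near/far limits of depth of field (thin-lens approximation).
# Assumed example: 50 mm lens, 0.03 mm CoC, focused at 3 m.
def dof_limits_mm(focal_mm, f_number, coc_mm, subject_mm):
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm          # hyperfocal distance
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    far = subject_mm * (h - focal_mm) / (h - subject_mm) if h > subject_mm else float("inf")
    return near, far

for n in (2.8, 5.6, 11):
    near, far = dof_limits_mm(50, n, 0.03, 3000)
    print("f/%-4s near %.2f m, far %.2f m" % (n, near / 1000, far / 1000))
# Under these assumptions the total sharp zone roughly doubles for each doubling of the
# f-number, and noticeably more of it falls behind the focus point than in front of it.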
Depth of field and focal length: A wide-angle lens has a shorter focal length and a telephoto lens has a longer focal length. The reason a wide-angle lens appears to have a greater depth of field than a telephoto lens is that details and sharpness problems in the image created by the wide-angle lens are compressed; if a section of the wide-angle image is enlarged to match the framing of the telephoto shot, the depth of field is effectively the same.
That is why wide-angle lenses are good at hiding a lack of sharpness, so they are a good choice when accurate focus is an issue. But by moving in closely to the subject, the sharpness advantage is lost.
With a telephoto lens, focus must be much more precise. In fact, when zoomed in fully at maximum focal length, the area of acceptable sharpness may be less than an inch (20 mm or so), especially with a wide aperture (low f-stop number). As a creative tool this can be used to advantage, because the eye tends to be drawn to sharply focused areas.
In other words, depth of field is the range of distance in front of the camera that is in sharp focus. If a camera is focused at a specific distance, only objects at that exact distance will be what we might consider sharp, and objects in front of and behind that point will be, to varying degrees, blurry. The areas in front of and behind the point of focus may still be acceptably sharp. The term 'sharp' is subjective, because the image doesn't abruptly become unacceptably blurry at a certain point in front of or behind the point of focus: the transition from sharp to out of focus is gradual, and may go unnoticed.
Several factors determine whether the objective misfocus becomes noticeable. Subject matter, movement, the distance of the subject from the camera, and the way in which the image is displayed all have an influence.
Depth of field can be anywhere from a fraction of an inch to virtually infinite. For instance, a close-up of a person's face may have shallow DOF (with someone just behind that person visible but out of focus, a device common, for instance, in melodramas and horror films); a shot of rolling hills might have great DOF, with both the foreground and background in focus. A close-up still photograph might employ a very shallow DOF to isolate the subject from a distracting background.
Precise focus is possible at only one distance; at that distance, a point object will produce a point image. At any other distance, a point object is defocused, and will produce a blur spot. When this circular spot is sufficiently small, it is indistinguishable from a point, and appears to be in focus; it is rendered as “acceptably sharp”. The diameter of the circle increases with distance from the point of focus; the largest circle that is indistinguishable from a point is known as the acceptable circle of confusion, or informally, simply as the circle of confusion. The acceptable circle of confusion is influenced by visual acuity, viewing conditions, and the amount by which the image is enlarged. The increase of the circle diameter with defocus is gradual, so the limits of depth of field are not hard boundaries between sharp and unsharp.
Several other factors, such as subject matter, movement, and the distance of the subject from the camera, also influence when a given defocus becomes noticeable.
The area within the depth of field appears sharp while the areas in front of and beyond the depth of field appear blurry.
The image format size also will affect the depth of field. The larger the format size, the longer a lens will need to be to capture the same framing as a smaller format. In motion pictures, for example, a frame with a 12 degree horizontal field of view will require a 50 mm lens on 16 mm film, a 100 mm lens on 35 mm film, and a 250 mm lens on 65 mm film. Conversely, using the same focal length lens with each of these formats will yield a progressively wider image as the film format gets larger: a 50 mm lens has a horizontal field of view of 12 degrees on 16 mm film, 23.6 degrees on 35 mm film, and 55.6 degrees on 65 mm film. What this all means is that because the larger formats require longer lenses than the smaller ones, they will accordingly have a smaller depth of field. Therefore, compensations in exposure, framing, or subject distance need to be made in order to make one format look like it was filmed in another format.
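The field-of-view figures quoted above can be reproduced with simple trigonometry. In the sketch below, the horizontal gate widths (roughly 10.3 mm for 16 mm film, 21 mm for 35 mm, and 52.5 mm for 65 mm) are assumed approximations, since the exact aperture dimensions vary between format standards.

import math

# Approximate horizontal gate widths in mm (assumed values; exact apertures vary by standard).
formats = {"16 mm": 10.3, "35 mm": 21.0, "65 mm": 52.5}

def horizontal_fov_deg(gate_width_mm, focal_length_mm):
    # Angle of view = 2 * atan(width / (2 * focal length))
    return 2 * math.degrees(math.atan(gate_width_mm / (2.0 * focal_length_mm)))

for name, width in formats.items():
    print("50 mm lens on %s film: %.1f degrees" % (name, horizontal_fov_deg(width, 50)))
# Roughly 12, 24 and 55 degrees, in line with the figures quoted in the paragraph above.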
Effect of f-number
For a given subject framing, the DOF is controlled by the lens f-number. Increasing the f-number (reducing the aperture diameter) increases the DOF; however, it also reduces the amount of light transmitted, and increases diffraction, placing a practical limit on the extent to which the aperture size may be reduced. Motion pictures make only limited use of this control; to produce a consistent image quality from shot to shot, cinematographers usually choose a single aperture setting for interiors and another for exteriors, and adjust exposure through the use of camera filters or light levels. Aperture settings are adjusted more frequently in still photography, where variations in depth of field are used to produce a variety of special effects.
Additional Notes for Students:
What is Diffraction?
Diffraction refers to various phenomena associated with the bending of waves when they interact with obstacles in their path. It occurs with any type of wave, including sound waves, water waves, and electromagnetic waves such as visible light, x-rays and radio waves. As physical objects have wave-like properties, diffraction also occurs with matter and can be studied according to the principles of quantum mechanics. While diffraction always occurs when propagating waves encounter obstacles in their paths, its effects are generally most pronounced for waves where the wavelength is on the order of the size of the diffracting objects. The complex patterns resulting from the intensity of a diffracted wave are a result of interference between different parts of a wave that traveled to the observer by different paths.
The hyper focal distance is the nearest focus distance at which the DOF extends to infinity; focusing the camera at the hyper focal distance results in the largest possible depth of field for a given f-number. Focusing beyond the hyper focal distance does not increase the far DOF (which already extends to infinity), but it does decrease the DOF in front of the subject, decreasing the total DOF. Some photographers refer to this as “wasting DOF”; however, the ‘object field method’ described below offers a rationale for focusing this way in certain situations.
If the lens includes a DOF scale, the hyper focal distance can be set by aligning the infinity mark on the distance scale with the mark on the DOF scale corresponding to the f-number to which the lens is set. For example, with the 35 mm lens shown above set to f / 11, aligning the infinity mark with the ‘11’ to the left of the index mark on the DOF scale would set the focus to the hyper focal distance. Focusing on the hyper focal distance is a special case of zone focusing in which the far limit of DOF is at infinity.
The object field method: Traditional depth-of-field formulas and tables assume equal circles of confusion for near and far objects. Some authors, such as Merklinger (1992),[3] have suggested that distant objects often need to be much sharper to be clearly recognizable, whereas closer objects, being larger on the film, do not need to be so sharp. The loss of detail in distant objects may be particularly noticeable with extreme enlargements. Achieving this additional sharpness in distant objects usually requires focusing beyond the hyper focal distance, sometimes almost at infinity. For example, if photographing a cityscape with a traffic bollard in the foreground, this approach, termed the object field method by Merklinger, would recommend focusing very close to infinity, and stopping down to make the bollard sharp enough. With this approach, foreground objects cannot always be made perfectly sharp, but the loss of sharpness in near objects may be acceptable if recognisability of distant objects is paramount.
Moritz von Rohr also used an object field method, but unlike Merklinger, he used the conventional criterion of a maximum circle of confusion diameter in the image plane, leading to unequal front and rear depths of field.
Depth of field can be anywhere from a fraction of a millimetre to virtually infinite. In some cases, such as landscapes, it may be desirable to have the entire image in focus, and a large DOF is appropriate. In other cases, artistic considerations may dictate that only a part of the image be in focus, emphasizing the subject while de-emphasizing the background, perhaps giving only a suggestion of the environment (Langford 1973, 81). For example, a common technique in melodramas and horror films is a close-up of a person's face, with someone just behind that person visible but out of focus. A portrait or close-up still photograph might use a small DOF to isolate the subject from a distracting background. The use of limited DOF to emphasize one part of an image is known as selective focus or differential focus.
Although a small DOF implies that other parts of the image will be un-sharp, it does not, by itself, determine how un-sharp those parts will be. The amount of background (or foreground) blur depends on the distance from the plane of focus, so if a background is close to the subject, it may be difficult to blur sufficiently even with a small DOF. In practice, the lens f-number is usually adjusted until the background or foreground is acceptably blurred, often without direct concern for the DOF.
Sometimes, however, it is desirable to have the entire subject sharp while ensuring that the background is sufficiently un-sharp. When the distance between subject and background is fixed, as is the case with many scenes, the DOF and the amount of background blur are not independent. Although it is not always possible to achieve both the desired subject sharpness and the desired background un-sharpness, several techniques can be used to increase the separation of subject and background.
For a given scene and subject magnification, the background blur increases with lens focal length. If it is not important that background objects be unrecognizable, background de-emphasis can be increased by using a lens of longer focal length and increasing the subject distance to maintain the same magnification. This technique requires that sufficient space in front of the subject be available; moreover, the perspective of the scene changes because of the different camera position, and this may or may not be acceptable.
The situation is not as simple if it is important that a background object, such as a sign, be unrecognizable. The magnification of background objects also increases with focal length, so with the technique just described, there is little change in the recognisability of background objects. However, a lens of longer focal length may still be of some help; because of the narrower angle of view, a slight change of camera position may suffice to eliminate the distracting object from the field of view.
Although tilt and swing are normally used to maximize the part of the image that is within the DOF, they also can be used, in combination with a small f-number, to give selective focus to a plane that isn't perpendicular to the lens axis. With this technique, it is possible to have objects at greatly different distances from the camera in sharp focus and yet have a very shallow DOF. The effect can be interesting because it differs from what most viewers are accustomed to seeing.
Follow Focus
Follow focus usually refers to a cameraperson or focus puller operating a focusing gear to keep a moving subject in focus. A moving subject is likely to move away from the focused point; a good depth of field may help, but the cinematographer / focus puller needs to constantly change focus as the position of the subject changes relative to the camera lens. In other words, ‘follow focus’ is used to refocus the camera to accommodate subject movement.
However, technically speaking, a follow focus is a piece of equipment that attaches to the focus ring of a manual lens via a set of rods on the body of a film or video camera. It is ergonomic rather than strictly necessary; in other words, it does not contribute to the basic functionality of a camera but instead allows the operator to be more efficient and precise. It is usually operated by a focus puller (often called the 1st assistant camera, or 1AC), but some camera operators prefer to pull their own focus (the act of changing focus is called "pulling" or "racking" focus).
A manual lens is usually a requisite for professional filmmaking. This is because auto-focus lens systems use lasers or infrared beams to measure the distance between the lens and the subject. This technique does not anticipate an actor stepping into the foreground of the frame, nor can it focus on anything which is not in the centre of the frame. The job of the focus puller then is to adjust the focus onto different subjects as well as change, or (follow) focus during movement of the camera onto the required subject, hence the term.
The mechanism works through a set of gears on the follow focus that mesh with teeth on the focus ring of the lens. These gears feed to a wheel which, when turned by the focus puller, spins the teeth and thus the ring on the lens. Practically, the device is not strictly necessary, as the operator can directly turn the ring on the lens; however, this would place the hand in an awkward position perpendicular to the camera rather than parallel, and turning beyond a certain distance (such as 360 degrees) would be impossible. Sometimes such a "focus pull" would even be difficult with a follow focus, so an L-shaped metal rod can be attached in the provided square hole at the centre of the wheel; the hand then merely has to spin the rod, which turns the wheel. The stationary white disk surrounding the wheel is used by the focus puller to jot down marks and pull focus according to the marks he/she took during rehearsals. A focus puller often uses a tape measure to measure the distance from the lens to the subject, allowing for accurate marking of the disk.
A follow focus is usually a compulsory piece of equipment for professional filmmaking, although those with low or no budgets, or cameras not equipped with detachable manual lenses, will have to make do with auto-focus systems or turning a lens ring by hand. To make matters worse, most auto-focus lenses with a focus ring (such as those on most consumer and prosumer camcorders) are not "true" manual focus lenses, meaning that turning the ring does not directly adjust the elements inside the lens but rather actuates the electronics inside the camera, which predict how the focus should change depending on how fast or far the ring was turned. These lenses make precise and repeatable focus pulls difficult, and use of a follow focus impractical. They are sometimes called "servo" focus or "focus by wire" lenses.
Racking focus is the practice of shifting the attention of the viewer of a film or video by changing the focus of the lens from a subject in the foreground to a subject in the background, or vice versa. The term dates back to the time when cameras did not have reflex viewing, so the operator would have to focus by looking through a separate viewfinder and then slide (rack) the camera over so that the shot would be in focus.
In the photo, in the first scene the woman (in focus) is sleeping. In the second shot when the phone rings, the focus shifts to the phone (on the right). As she picks up the phone and starts to talk, the focus shifts (racks) back again to bring her into focus.
Focus shifts have to be rehearsed so that one can manually rotate the lens focus control from one predetermined point to another.
Sometimes video-graphers temporarily mark the points on the lens barrel with a grease pencil. After locking down the camera on a tripod, they can then shift from one focus point to another as needed.
Auto-focus Lenses
Auto-focus is quite helpful when moving objects have to be captured.
Most auto-focus devices assume that the area in sharp focus is in the centre of the picture. The auto-focus area (the area the camera will automatically focus on) is in the green rectangle in this photo. The centre area is correctly focused, but the main subject is blurry. To deal with this problem the camera could be panned or tilted to bring the main subject into the auto-focus area, but the composition would be changed.
When shooting through such things as glass and wire fences, auto-focus cannot reliably determine accurate focus.
Auto-focus devices, especially under low light, can keep readjusting or searching for focus as you shoot, which can be distracting.
For all these reasons, video-graphers typically turn off auto-focus and rely on their own focusing techniques. The only exception is a difficult situation where there is no time to keep bringing the subject matter into focus manually.
Deep focus is a photographic and cinematographic technique incorporating a large depth-of-field. Depth-of-field is the front-to-back range of focus in an image — that is, how much of it appears sharp and clear. Consequently, in deep focus the foreground, middle-ground and background are all in focus. This can be achieved through knowledgeable application of the hyper-focal distance of the camera lens being used.
The opposite of deep focus is shallow focus, in which only one plane of the image is in focus.
In the cinema, Orson Welles and his cinematographer Gregg Toland were the two individuals most responsible for popularizing deep focus. Their film Citizen Kane (1941) is a veritable textbook of possible uses of the technique.
Shallow focus is a photographic and cinematographic technique incorporating a small depth of field. In shallow focus one plane of the image is in focus while the rest is out of focus. Shallow focus typically is used to emphasize one part of the image over another. Photographers sometimes refer to the area that is out of focus as ‘bokeh’ (see below for description).
The opposite of shallow focus is deep focus, in which the entire image is in focus. Deep focus photographic technique more closely approximates what is seen by the human eye.
Bokeh (from the Japanese boke ボケ, "blur") is a photographic term referring to the appearance of out-of-focus areas in an image produced by a camera lens. Different lens ‘bokeh’ produces different aesthetic qualities in out-of-focus backgrounds, which are often used to reduce distractions and emphasize the primary subject.
How to:
A shallow focus is achieved by using a big aperture in conjunction with a long lens or a close subject distance. Common lenses used in still photography to achieve a shallow focus are 50/1.4, 85/1.8, and 135/2.
Further, using a camera with a large sensor helps; in still photography, large format inherently has narrower focus than 35 mm, which is still narrower than compact digital cameras. If you can choose what format to use, go with the largest sensor available.
Shallow focus is often used in portraiture, to isolate the subject from the background.
Macro photography is close-up photography; the classical definition is that the image projected on the "film plane" (i.e., film or a digital sensor) is close to the same size as the subject. On 35 mm film (for example), the lens is typically optimized to focus sharply on a small area approaching the size of the film frame. Most 35mm format macro lenses achieve at least 1:2, that is to say, the image on the film is 1/2 the size of the object being photographed. Many 35mm macro lenses are 1:1, meaning the image on the film is the same size as the object being photographed. Another important distinction is that lenses designed for macro are usually at their sharpest at macro focus distances and are not quite as sharp at other focus distances.
In recent years, the term macro has been used in marketing material to mean being able to focus on a subject close enough so that when a regular 6×4 inch (15×10 cm) print is made, the image is life-size or larger. This requires a magnification ratio of only approximately 1:4, more easily attainable by lens makers.
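As a quick check of why a ratio of roughly 1:4 is enough for a "life-size" print at this size, the sketch below assumes a full 35 mm frame (36 mm wide) enlarged to a 6 inch (about 152 mm) print width; both figures are assumptions for illustration.

# Why a 1:4 reproduction ratio can give a life-size image on a 6 x 4 inch print.
# Assumed: full 35 mm frame, 36 mm wide, printed 6 inches (about 152 mm) wide.
frame_width_mm = 36.0
print_width_mm = 152.0

enlargement = print_width_mm / frame_width_mm        # about 4.2x
on_film_magnification = 1 / 4.0                      # the 1:4 ratio mentioned above
print_magnification = on_film_magnification * enlargement
print("Enlargement factor: %.1fx" % enlargement)
print("Subject appears at %.2fx life size in the print" % print_magnification)  # about 1.06x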
Technical considerations:
Limited depth of field is an important consideration in macro photography. This makes it essential to focus critically on the most important part of the subject, as elements that are even a millimetre closer or farther from the focal plane might be noticeably blurry. Due to this, the use of a microscope stage is highly recommended for precise focus with large magnification such as photographing skin cells.
The problem of sufficiently and evenly lighting the subject can be difficult to overcome. Some cameras can focus on subjects so close that they touch the front piece of glass in the lens. It is impossible to place a light between the camera and a subject that close, making this extreme close-up photography impractical. A normal-focal-length macro lens (50 mm on a 35 mm camera) can focus so close that lighting remains difficult. To avoid this problem, many photographers use telephoto macro lenses, typically with focal lengths from about 100 to 200 mm. These are popular as they permit sufficient distance for lighting between the camera and the subject.
Ring flashes, with flash tubes arranged in a circle around the front of the lens, can be helpful in lighting at close distances. Ring lights have emerged, using white LEDs to provide a continuous light source for macro photography.
The Macro Lens Setting
Look at the picture of the coin: the needle is shot with a telephoto lens, and only the thread is in focus. The macro setting enables the lens to attain sharp focus on an object only a few inches, or even a few millimetres, from the front of the lens. The physical distance to the subject can therefore be very close, the minimum focusing distance is also short, and it introduces no distortions compared with the telephoto approach. Nowadays the newer lenses are called continuous focus lenses: one can smoothly and continuously adjust these internal focus lenses from infinity down to a few inches without manually shifting the lens into macro mode.
A tripod or camera mount is a must when using the macro setting. Not only is depth of field normally limited to just a few millimetres, but camera movement is greatly exaggerated.
The exact shot framed by the cinematographer can communicate many things to the viewing audience. The framing of a particular shot can communicate power or weakness.
Wide Shot: It shows the whole body or space, establishes the scene and the setting of relationships, and allows plenty of room for action. In short, we can safely say that the wide shot establishes location or setting (thereby revealing geography) and introduces action.
Medium Wide Shot: It shows most of the body or space, also establishes the geography, and allows room for movement and for other subjects to enter the frame. In other words, it establishes character and lets you follow the character.
Medium Shot: It shows the subject from the waist up. It enables a connection with the subject while providing room for gestures, and it is the most frequently used shot. It provides an intimate view of the subject and focuses attention on salient characteristics. In other words, the medium shot provides new visual information compared with wide shots, shows a closer view of the action, and provides visual variety for editing purposes.
Bust Shot: It shows the subject from mid-chest up and is also referred to as a head-and-shoulders shot. It basically provides a closer view of the character, is used as a listening or reaction shot, provides standard framing for interviews, and provides visual variety in editing.
Medium Close-Up: It shows part of the subject and focuses attention on details.
Close-Up: It shows an enlarged view of part of the subject, drawing attention to details and adding emotion.
N.B.: for the above chart see ‘Grammar of Television’ by Berger, found at http://euphrates.wpunj.edu/faculty/yildizm/SP/ ; also see How Media Products Make Meaning: http://www.uiowa.edu/~centeach/resources/ideas/media.glossary.pdf
CAMERA SHOTS/CINEMATOGRAPHY
The exact shot framed by the cinematographer can communicate many things to the viewing audience. The framing of a particular shot can communicate power or weakness, for example.
Other considerations. One could also notice:
• high and low angle shots
• when the camera moves and why
• the distance between camera and actor or action.
"Often a low angle reinforces the sense that the subject is large or dominant or imposing or powerful, but not always."
With the camera low, shooting UP, it gives the audience the impression that someone is larger, towering, more important, or powerful. "Often a low angle reinforces the sense that the subject is large or dominant or imposing or powerful, but not always." ("Film, An Introduction", William H. Phillips, Part One, The Expressiveness of Film Techniques, p. 8)
With the camera high, shooting DOWN, it gives the audience the impression that someone is smaller, less significant, helpless, or vulnerable. "...a high angle does not always make the subject(s) seem small, vulnerable, or weak, though in many contexts it does." ("Film, An Introduction", William H. Phillips, Part One, The Expressiveness of Film Techniques, p. 8)
Close Ups (primarily faces, signify intimacy) (Media Analysis Techniques (2nd Ed.) Arthur Asa Berger )
Medium Shots (most of body, personal relationship) (Media Analysis Techniques (2nd Ed.) Arthur Asa Berger )
Wide/Long/Establishing Shots (setting & characters; context, scope, public distance)
Follow Action: The camera follows the subject as they move, which may involve panning, tilting, and zooming. This is done to keep the subject in the frame and to add energy and movement to a scene.
Let in/out: Here the camera is stationary and the subject enters or leaves the frame. This is done when the subject has to enter or leave a scene, or as a transition between scenes or subjects. Letting in can establish a setting and then bring attention to a subject walking into that setting; letting out can be used to end a scene.
Let in & Follow: The subject enters the camera frame and then the camera follows the moving subject; in this way the subject assumes importance and this adds to the drama. It is used to establish a scene and then follow the action, change attention from one subject to another, pick up the pace of a scene, and show a transition between subjects.
Shift Attention
- This uses a pan, tilt or combination to change the main subject of a shot from one element to another.
- To shift attention from one element in the frame to another, show physical relationship between subject elements, follow action by changing framing when main action changes between subject elements or show secondary activities happening while main action occurs.
- It is not used in situations where the depth of field is shallow. It is a subtle and creative tool, and an excellent method to selectively show the audience what you want them to see.
Post-production occurs in the making of audio recordings, films/movies, photography and digital art, videos and television programs. It is the general term for all stages of production occurring after the actual recording and ending with the completed work.
Post-production is in fact many different processes grouped under one name. These typically include:
Editing the picture / TV program
Editing the soundtrack.
Writing and recording the soundtrack music.
Adding visual special effects - mainly computer generated imagery (CGI).
Creating the digital copy from which release prints will be made (although this may be made obsolete by digital cinema technologies).
Transfer of film to Video or Data with a telecine and colour corrector.
Typically, the post-production phase of creating a film takes longer than the actual shooting of the film, and can take several months to complete. Other film production stages include (very broadly) script development (rewriting), financing, pre-production, the actual shooting, and film distribution / marketing.
Editing is an art of storytelling practiced by connecting two or more shots together to form a sequence, and the subsequent connecting of sequences to form an entire movie / television program. Editing is the only art that is unique to cinema / television, and which separates film & Television making from all other art forms that preceded it (such as photography, theatre, dance, writing, and directing). However there are close parallels to the editing process in other art forms such as poetry or novel writing. It is often referred to as the "invisible art," since when it is well-practiced, the viewer becomes so engaged that he or she is not even aware of the work of the editor.
Because almost every motion picture, television show, and TV commercial is shot with one camera per take, every single shot is separated from every other single shot by time and space. On its most fundamental level, video editing is the art, technique, and practice of assembling these shots into a coherent whole. However, the job of an editor isn’t merely to mechanically put pieces of a film together, nor is it to just cut off the film or video slates, nor is it merely to edit dialogue scenes. A film editor works with the layers of images, the story, the music, the rhythm, the pace, shapes the actors' performances, "re-directing" and often re-writing the film or video during the editing process, honing the infinite possibilities of the juxtaposition of small snippets of clips into a creative, coherent, cohesive whole.
Editing, be it film or video, is an art that can be used in diverse ways. It can create sensually provocative montages. It can be a laboratory for experimental genre. It can bring out the emotional truth in an actor's performance. It can create a point of view on otherwise obtuse events. It can guide the telling and pace of a story. It can create the illusion of danger where there is none, surprise when we least expect it, and a vital subconscious emotional connection to the viewer.
Please note: anyone who is under the illusion that this is only true for the fiction genre, and not applicable to documentary and other formats including cutting for news, is wrong.
Television and film use certain common conventions often referred to as the 'grammar' of these audiovisual media. This list includes some of the most important conventions for conveying meaning through particular camera and editing techniques (as well as some of the specialised vocabulary of film production). Conventions aren't rules: expert practitioners break them for deliberate effect, which is one of the rare occasions that we become aware of what the convention is.
1. Cut. Sudden change of shot from one viewpoint or location to another. On television cuts occur on average about every 5 or 6 seconds. Cutting may:
• change the scene;
• compress time;
• vary the point of view; or
• build up an image or idea.
There is always a reason for a cut, and you should ask yourself what the reason is. Less abrupt transitions are achieved with the fade, dissolve, and wipe. In a cut, the first frame of a new shot directly follows the last frame of the previous one. Grammatically, a cut is like the space between two words: a division between units of meaning that signals no change at all.
In classic editing, a cut should be nearly invisible because the action on screen moves across the division between shots in an uninterrupted flow. This enhances the illusion that the viewer is watching a continuous process instead of a bunch of discrete images.
Creating this illusion is easy when the shots show different subjects, such as close-ups of two different actors, because the viewer expects the image to change completely from shot to shot. But when two shots cover successive views of the same subject you must spackle the seam with two crucial editing techniques: matching action and changing camera angle.
In matching action you set the edit points so that the incoming shot picks up precisely where the outgoing shot leaves off. There are three ways to do this: continue movement, cut between movements, and start or end off-screen, as you can see from Figure 1.
Cutting in the middle of an ongoing movement is the hardest method but it delivers the most convincing illusion. In the outgoing shot of Figure 1a, the cup descends part-way to its saucer. Then the incoming shot starts with the cup on-screen and continues on its path toward the table. With precision matching, the two arcs seem like different views of the same continuous action. You can match continuous action with consumer-level editing decks if you're willing to practice with the deck's accuracy.
An easier way is to make the cut during a pause in the action, as shown in Figure 1b. Here, the performer completes the whole set-down in medium shot and the close-up starts with the hand and the cup at rest. With no movement to match, the edit is easier.
Simpler yet is the old off screen ploy (Figure 1c). The incoming shot starts before the cup enters the frame, so the viewer cannot compare its end position with its start position. With this method, you don't have to match action at all.
The method works equally well if you reverse it so that the outgoing cup ends on-screen and the incoming cup starts off-screen. And when you have a really difficult edit, try both at once: finish the outgoing and start the incoming shots with empty screens.
Whichever method you use, matching action does only half the job of concealing the cut. To perfect the illusion you must also shift the camera position. By moving the point of view, you change the subject's background and deprive the viewer of reference points for matching action.
As we've often noted, you can change three aspects of camera setup: vertical angle (from bird's-eye down to worm's-eye), horizontal angle (from front through 3/4 and profile to rear) and image size (from long shot to close-up). Figure 2 shows why it's tough to conceal a cut without changing at least one of these aspects and preferably two.
Figure 2a shows no angle change between the two shots and the obvious jump cut that results. Figure 2b changes one aspect: image size. If you're a slick editor you can make this cut work, but it's easier if you can change a second aspect as well. In Figure 2c the edit changes vertical angle as well as image size for a smoother transition.
Should you change all three aspects of a camera position? Maybe, but not necessarily. It doesn't add to the illusion and it can actually call attention to the edit because the viewpoint change is so great. On the other hand, an extreme angle change can be effective in building suspense precisely because it produces an effect of uneasiness or even disorientation.
Editing cuts
Match Cut
Description: Combining two shots of differing angle and composition so that the action continues from one to the other in the same time and place.
This shows a seamless progression of action, focuses on a detail of the action, provides a different view that enhances three-dimensionality, and adds energy and increases pacing.
The shot above could be followed by a close up of the hands
Jump Cut
Combining two shots that are similar so that the subject jumps from one part of the screen to another.
It attracts attention and speeds up time.
Cutaway
It shows a subject, a close-up detail, or a person observing the action; this subject is not seen in the shots edited before or after the cutaway. This is done to cover jump cuts, provide the reactions of others to the main action, and focus attention on the subject.
Editing Transitions & Effects
Fade from and to Black
Fade from black: the image gradually appears from a black screen. Fade to black: the image gradually disappears to a black screen. The purpose is to begin and end a video; it can also serve as a transition between segments or scenes, or signify a major change in time or location.
Dip to Black
A quick fade to black and then back to video. To go to or from a commercial break, quick transition between segments or scenes, or transition between footage and full screen graphics.
Dissolve
A transition between shots where one image is gradually mixed with another until the second image is full screen. To enhance emotions, soften changes between shots, accentuate rhythm of pacing, enhance artistry of action, and smooth jump cuts.
Wipe
A transition between shots that uses movement across the screen. Traditional wipes include changing the image with a move from right or left, up or down, or diagonally. Effects wipes include spins, flips, and animated moves. To show obvious transition between scenes, segments or graphics; add energy and action and increase pacing.
Super
Mixing two images together to show two views of subject at the same time, suggesting that main subject is thinking about the other.
Freeze
A single frame of video that is frozen on the screen to end action, accentuate moment or character, background for graphics, lengthen short shot.
Editing - Graphics & Titles
Lower Third Title
Text appearing in the bottom third of the screen. It identifies the name and title of an interview subject, or provides a caption for an image.
Full Screen Graphic
It’s a combination of text, background or artwork that fills the screen. For titles in the beginning of a video or a segment, key points or summations, charts and graphs, transition between segments or to or from commercials.
Editing -Techniques & Principles
B-roll: It refers to footage that covers an interview or narration audio. It is used to illustrate what is discussed in the audio, add energy and increase pace, and cover audio track edits.
For example, while someone talks, scenes relating to what the person is saying are shown.
Establishing the scene: It is a wide shot showing the setting, used to introduce the location for a scene, provide a sense of the three-dimensional space where the action occurs, and introduce characters. Example: all the shots are wide, showing people doing things.
Changing the scene/segment
It is a visual or audio cue that a new scene or segment has begun. It moves the story along, adds variety to the story, and indicates a passage of time or a change in location. For example, an establishing shot (with people talking) followed by the main person talking.
Visual Sequence
It features a series of shots showing the subject or a process in action. It is used to focus attention on the action or process, show its details, show the progression of the action, engage the viewer with the subject, and facilitate comprehension. Example: a person applying make-up.
Montage Sequence
A series of images, usually set to music, that quickly shows various aspects of the story. It shows the passage of time, provides a glimpse of actions or events not covered in detail, captures viewer interest at the beginning of a video, sums up the story at the end, provides a change of pace, and adds energy.
Natural Sound
It includes ambient sounds of subjects overheard during recording. To enhance sense of reality, capture spontaneous speech of subject in the natural situation, establish the setting or situation, show transition between scenes or locations, provide background sound to narration.
Video Resolution
Video resolution is a measure of the ability of a video camera to reproduce fine detail. The higher the resolution, that is, the more distinct lines in a given space the camera can discern, the sharper the picture will look. The standard NTSC broadcast TV system can potentially produce a picture resolution equal to about 300 lines of horizontal resolution.
Minimum Light Levels for Cameras/Threshold level
Television cameras require a certain level of light to produce good-quality video. This light level is measured in lux or foot-candles. The latter is used in the United States and lux is used in other countries.
A foot-candle is a measure of light intensity from a candle at a distance of one foot. Most professional video cameras require a light level of at least 75 foot-candles (750 lux) to produce the best quality video. However, some will produce marginally acceptable video under a few lux of light.
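As a side note for students, a foot-candle is one lumen per square foot, and the exact conversion factor is slightly higher than the convenient factor of ten implied by the round figure above; a quick sketch:

# Converting foot-candles to lux (1 foot-candle = 1 lumen per square foot, about 10.764 lux).
LUX_PER_FOOT_CANDLE = 10.764

def fc_to_lux(foot_candles):
    return foot_candles * LUX_PER_FOOT_CANDLE

print(fc_to_lux(75))   # about 807 lux, commonly rounded to roughly 750-800 lux in practice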
At low light levels the iris of a camera must be wide open (at the lowest f-stop number) to allow in the maximum amount of light. As the light level increases in a scene the iris of the lens must be stopped down (changed to a higher f-stop number) to maintain the same level of exposure on the camera target.
Under low light conditions video can quickly start to look dark, with a complete loss of detail in the shadow areas. To help compensate, professional cameras have built-in, multi-position video gain switches that can amplify the video signal in steps from 3 up to about 28 units (generally the units are decibels, or dB). There are usually four positions, for example 1, 4, 9 and 12 dB. But the greater the video gain boost, the greater the loss in picture quality: specifically, video noise increases and colour clarity diminishes.
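Decibels express gain on a logarithmic scale. The minimal sketch below shows what a given dB boost means for the video signal, assuming the standard 20·log10 voltage convention; the example step values are chosen from the 3 to 28 dB range mentioned above and are illustrative only.

# Signal gain factor corresponding to a boost in decibels (20*log10 voltage convention assumed).
def gain_factor(db):
    return 10 ** (db / 20.0)

for db in (3, 12, 28):
    print("+%d dB boosts the signal by a factor of about %.2f" % (db, gain_factor(db)))
# Every +6 dB roughly doubles the signal, and amplifies the noise along with it.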
Night Vision Modules
For very low light situations night vision modules are used. They have electronic light multipliers to amplify the light going through the lens. The most refined of these can produce clear video at night using only the light from stars. Under conditions of no light, most of these modules emit their own invisible infrared illumination, which is then translated into a visible image. They are used by camera operators covering news, especially nighttime stories, where any type of artificial lighting would call attention to the camera and adversely affect the story being covered.
Camera Mounts
Using a camera tripod can make all the difference. Although it has to be carried and set up, the results can be well worth the effort, especially when the subjects move. The legs of the tripod should be extended until they touch the ground firmly. While choosing a tripod, one should ensure that the legs are strong, because weak legs can shake in a gust of wind or in the higher wind velocities found at altitude. If the head of the tripod is sold separately, the cost will jump up.
Camera Pan Heads
On most tripods the pan-and-tilt head is not meant to be used for smooth panning and tilting while shooting; it is only there to reposition and lock the camera into position between takes. A cut from one scene to another is faster and generally better than panning, tilting or zooming to new subject matter. Today, however, many tripods have heads designed to smooth out pan and tilt movements. The most-used type is the fluid head, which provides adjustable resistance to pans and tilts.
Bean Bags
A simple camera support that works in many situations is the beanbag. The 'beans' inside are small, round pieces of soft plastic, so the bag forms a cushion that conforms to the camera and the surface it rests on.
Wireless Camera Modules (RF Camcorder Transmitters)
Although camera operators doing "live" broadcasts from the field used to have to be "hard wired" to a production truck, today's cameras can be equipped with an RF transmitter. The camera signal is transmitted to the production truck where it appears on a monitor just like any other source of video.
There are three types of equipment:
Consumer- camcorders, handycams: these are used by people for taking family pictures or people who like to shoot as a hobby. They are not that expensive.
Portable professional equipment: These are used in small studios. More expensive, have to be handled with care, used for recording in the studios and outdoors.
In-built professional equipment: Very expensive, built for studio-based applications. They are generally not taken outside, as they are very high-precision and sensitive to outdoor conditions.
• Stability and durability
• Consistent results
• Good Image Quality – no picture noise, good colour definition
Studio Camera Mounts
In the studio, the entire camera assembly is mounted on a pedestal or dolly so that the operator can smoothly roll it around on the floor. The three wheels in the base of the pedestal can be turned using the steering ring. The camera is directly attached to a pan head, which enables the pan and tilt (horizontal and vertical) camera movements to be adjusted.
Controls on the pan head allow the camera to move freely, to be locked into position, or to offer controlled resistance to facilitate smooth pans and tilts.
Although the camera may weigh more than 100 pounds (50 Kg), internal counter-weights allow an operator to easily raise and lower the camera when the telescoping column in the centre is unlocked. The photo above shows some of the other key parts of a typical studio camera pedestal.
A simpler camera support is the collapsible dolly shown on the left. This type of mount is used for remote productions and in some small studios. Unlike the elaborate studio pedestal that can be smoothly rolled across a studio floor (even while the camera is on the air), the wheels on small dollies are intended primarily to move the camera from place to place between shots.
Robotic Camera Mounts
In some settings camera operators are being replaced by remotely controlled, robotic camera systems. From the TV control room, technicians can adjust the pan, tilt, zoom, and focus, and even remotely dolly and truck these cameras around the studio. They are convenient for predictable programs such as newscasts and interviews, but undesirable for unpredictable or fast-moving subject matter.
Innovative Camera Mounts
The Segway HT Platform
The Segway platform can move over a smooth surface while automatically maintaining balance on its two wheels; to steer, the rider gently pulls the steering handle forward, back, left, or right.
The "Follow-Me" Camera Mount
As news departments reduce expenses, this type of mount offers an alternative to a dedicated camera operator: it can pan the camera to follow a reporter. As the reporter moves, the camera automatically pans left or right to keep the reporter centered in the frame. The unit will track an on-camera person within a 35-foot radius in a 4,000-sq.-foot coverage area. The reporter has to wear a belt-pack transmitter, and the receivers on the extended arms on either side of the camera pick up the signal.
Camera Jibs
A jib is a long, highly maneuverable boom or crane-like device with a camera mounted at the end. You may have seen them swinging overhead at concerts and other events with large audiences; at the operator's end are two video monitors (one for the camera output and one for program video) and the heavy weights that help balance the weight of the camera and crane. A jib allows sweeping camera movements from ground level to nine meters or more in the air. For more mobile camera work outside the studio, handheld camera supports allow significant mobility, while still offering fairly steady camera shots.
Camera Tracks and "Copters"
For elaborate productions, installing camera tracks allows the camera to follow talent and move through a scene more smoothly. Although a camera operator can ride with the camera, some cameras are remotely controlled. Camera 'copters' can provide aerial views of various sporting events: a ground observer remotely controls the entire unit, and the unit's omnidirectional microwave link relays the video to the production van.
Gallery/Control Room Team
The following crew positions are only utilised on a multi-camera production. The Gallery or "Control Room" is a separate darkened area away from the studio floor where the action can be viewed across multiple monitors and controlled from a single source.
Television director - Director
A Director in television usually refers to the Gallery (or Control Room) Director, who is responsible for the creative look of a production through selecting which shots to use at any given moment. The Director views the action on the studio floor through a bank of screens, each one linked to one of the studio cameras, while issuing instructions down to the Floor Manager. They also control the Gallery area, calling for sound rolls, on-screen graphics (Astons) and video rolls (VTs). Some directors also work more closely with on-camera talent, and others act as both producer and director; the director sometimes also works as the switcher.
Production Assistant
Commonly referred to simply as the PA, the Production Assistant assumes a prompting role in the Gallery or Control Room. They are responsible for communication with the broadcasting channel during a live show, counting down the time before transmission aloud to the crew via the studio microphone. They also count down the time remaining for sections of a programme, such as an interview or an advert break. Prior to a production, the PA is responsible for preparing and timing the script, noting pre-recorded inserts, sound effects and suchlike, and for clearing copyright and other administrative issues. The PA is also responsible for scripting shots and shot division.
Director of photography
The DoP (director of photography) is responsible for recording the visual image of the film. In artistic terms, he or she uses the photography to enhance the telling of the story by manipulating the look or mood of a shot, drawing the audience's attention to one thing or another. The DoP is responsible to the producer.
After reading the screenplay, DoPs meet with the Director to discuss the visual style of the film. They conduct research and preparation including carrying out technical reconnaissance of locations. They prepare a list of all required camera equipment, including lights, film stock, camera, cranes and all accessories etc., for requisition by the production office. During preparation DoPs also test special lenses, filters or film stocks, checking that the results are in keeping with the Director's vision for the film. On each day of principal photography, DoPs and their camera crews arrive early on set to prepare the equipment for the day's work. During rehearsals, the Director and DoP block (decide the exact movements of both actors and camera) the shots as the actors walk through their actions, discussing any special camera moves or lighting requirements with the Camera Operator, Gaffer and Grip. Each shot is marked up for focus and framing by the 1st AC, and, while the actors finish make-up and costume, the DoP oversees the lighting of the set for the first take. On smaller films, DoPs often also operate the camera during the shoot. At the end of each shooting day, DoPs prepare for the following day's work, and check that all special requirements (cranes, Steadicams, remote heads, long or wide lenses, etc.) have been ordered. They also usually view the rushes with the Director. During post production, DoPs are required to attend the digital grading of the film, which may involve up to three weeks of intensive work.
Vision Mixer or Switcher The Vision Mixer is responsible for the actual switching between different video sources, such as camera shots and video inserts. They also maintain colour and contrast balance between the studio cameras. The Vision Mixer is, confusingly, also the name of the equipment which the Vision Mixer operates.
Aston Operator or Graphics operator The Aston Operator prepares and displays on-screen graphics.
VT Operator The VT Operator cues and prepares video inserts into a programme. Heavily used in sports programming, they are also responsible for action replays and quickly editing highlights while a show is in progress.
Post production
Tasks, such as taking down sets, dismantling and packing equipment, handling final financial obligations, and evaluating the effect of the program, are part of the postproduction phase.
As computer-controlled editing techniques and postproduction special effects have become more sophisticated, editing has gone far beyond the original concept of joining segments in a desired order. Editing is now a major focus of production creativity because video and audio recordings are used to blend the segments together. Technicians add music and audio effects to create the final product.
The Editing Phase
After shooting is completed, the producer, director, and video recording editor review the footage and make editing decisions. This is often done in two phases: off-line and on-line.
In off-line editing, copies of the original taped footage that contain a time-code reference are used to develop a kind of blueprint for final editing. In on-line editing the original footage itself is used. During the final editing phase, sound sweetening, colour balancing, and special effects are added.
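To make the idea of a time-code reference concrete, here is a minimal sketch of how an absolute frame count maps to an HH:MM:SS:FF time code, which is what lets decisions made on the off-line copies be matched back to the original footage during the on-line edit. The 25 fps, non-drop-frame rate is an assumption chosen purely for illustration; it is not specified in the text.

    # Minimal sketch: convert an absolute frame count to an HH:MM:SS:FF
    # time code (assuming 25 fps, non-drop-frame; illustrative only).
    FPS = 25  # assumed frame rate

    def frames_to_timecode(frame_count: int, fps: int = FPS) -> str:
        """Return the time code string for a given frame count."""
        frames = frame_count % fps
        total_seconds = frame_count // fps
        seconds = total_seconds % 60
        minutes = (total_seconds // 60) % 60
        hours = total_seconds // 3600
        return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

    # An edit point noted at frame 93210 of the off-line copy:
    print(frames_to_timecode(93210))  # 01:02:08:10

Because the same time code travels with both the working copies and the masters, an edit decision made off-line points at exactly the same frames when the on-line edit is performed.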
Do Postproduction Follow-Up
This includes totaling up financial statements, paying the final bills, and determining the production's success (or failure). Ratings indicate success levels in broadcast television; in other settings success may be judged through testing, program evaluations, and viewer feedback. Ratings are the numbers that can signal the end of TV programs. Lastly, the program is reviewed, promoted, and broadcast.
The Crew
Pre-production
Everything before the shooting of the film is known as the pre-production stage. People involved in this stage include the director, the producer, the scriptwriter, the researcher, the set designer, the make-up artist, and the costume designer.
Editor
The editor works with the director in editing the film that has been shot. The director has the ultimate accountability for editing choices, but the editor often contributes to the creative decisions involved in piecing together the finished product. Often, the editor comes into the picture while filming is still in progress, compiling initial takes of footage. Editing a television show is an extremely long process, which demonstrates the importance and significance editing has for a production.
The editor follows the screenplay as the guide for establishing the structure of the story and then uses his or her talents to assemble the various shots and takes for greater, clearer artistic effect. There are several editing stages. In the first stage, the editor is supervised by the director, who spells out their vision to the editor; this first rough cut is therefore often called the "director's cut". After the first stage, the following cuts are supervised by one or more producers, who represent the production company and its investors. Consequently, the final cut is the one that most closely represents what the studio wants from the film and not necessarily what the director wants.
Sound editor
In television, the sound editor deals with the mixing, adjusting and fixing of the soundtrack. They usually have a major decision-making and creative role when it comes to sound and audio. A sound editor decides what sound effects to use and what results to achieve with them, edits and makes new sounds using filters and combined sources, shapes sound with volume curves, and equalizes. A sound editor takes the Foley artist's sounds and puts them in place so that they work with the picture and sound natural, even if the sound itself is unnatural. A sound editor also makes extensive use of a sound effects library, either self-compiled, bought or both, because many sounds do not come through well enough if taken straight from the shoot of the show.
Foley artist
The Foley artist on a film crew is the person who creates and records many of the sound effects. Foley artists, editors, and supervisors are highly specialized and are essential for producing a professional-sounding soundtrack, often reproducing commonplace yet essential sounds like footsteps or the rustle of clothing. The Foley artist also fabricates sounds that can’t be correctly recorded while filming, much like the sound editor does with digital sound effects.
Publicist
A publicist, or advertiser, has the task of raising public awareness of a production and ultimately increasing viewership and sales of the production and its merchandise. The publicist's main task is to stimulate demand for a product through advertising and promotion. Advertisers use several recognizable techniques in order to better convince the public to buy a product.
The publicist ensures the media are well aware of a project by distributing the show as a trial run or "sneak preview"; through press releases; interviews with members of the cast or crew; arranging exclusive visits to the set of the production; and creating media kits, which contain pictures, posters, clips, shorts, trailers and brief descriptions of the show and its plot.
Composer
A composer is a person who writes the music for a production. They may also conduct the orchestra that plays the music, or perform as part of it. The composer is the originator of the music, and usually its first performer. The composer occasionally writes the theme music for a television show. A television program's theme music is a melody closely associated with the show, usually played during the title sequence and end credits. If it is accompanied by lyrics, it is a theme song; for example, the theme song of the sitcom Friends was the ever-popular "I'll Be There for You".
Title sequence designer
A title sequence, in a television program, is shown at the beginning of the show and displays the show's name and credits, usually including actors, producers and directors. A montage of selected images and a theme song are often included to suggest the essential tone of the series. A title sequence is essential in preparing the audience for the program that follows, and it gives them a sense of familiarity that makes them trust, and feel comfortable with, the show. It is up to the title sequence designer to achieve this goal and make the sequence catchy, entertaining and appealing, increasing the audience's positive feeling towards the show. Example: That '70s Show, with the six characters singing in the car.
Special effects coordinator
Special effects (SPFX) are used in television to create effects that cannot be achieved by normal means, such as depicting travel to other star systems. They are also used when creating the effect by normal means is prohibitively expensive, such as an enormous explosion, and to enhance previously filmed elements by adding, removing or enhancing objects within the scene. The special effects coordinator implements these effects and directs them with the help of the visual effects director. The coordinator's tasks vary widely and can range from extensive, over-the-top special effects to basic computer animation.
ADR editor
Automatic dialogue replacement (ADR) is the process of replacing dialogue that was recorded badly during filming with the actors' voices recorded again and put into place during editing. The ADR editor oversees the procedure, taking the unusable dialogue and replacing it with the newly recorded lines, synchronised to the actor's mouth on film so that it lip-syncs correctly.
The matte artist or blue screen director
Blue screen is the film technique of shooting foreground action against a blue background, which is then replaced by a separately shot "background plate" scene using either optical effects or digital compositing. This process is directed and coordinated by the blue screen director. The matte artist is part of the special effects department and assists in creating scenery and locations that do not exist, assembling backgrounds by traditional techniques or on computers that are mixed with the filmed footage to create a false set. The two roles are fairly alike, but blue screen technology is more modern and more widely used.
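As a side note, the digital compositing step described above can be illustrated with a toy chroma-key sketch. None of this is from the text; the NumPy array representation and the blue-dominance threshold are assumptions made purely for illustration.

    # Toy chroma-key sketch: pixels that are strongly blue in the foreground
    # plate are replaced with the corresponding background-plate pixels.
    import numpy as np

    def blue_screen_composite(foreground: np.ndarray,
                              background: np.ndarray,
                              threshold: float = 1.3) -> np.ndarray:
        """Both images are HxWx3 float arrays in [0, 1]; returns the composite."""
        r, g, b = foreground[..., 0], foreground[..., 1], foreground[..., 2]
        # Treat a pixel as "screen" when blue clearly dominates red and green.
        is_screen = (b > threshold * r) & (b > threshold * g)
        composite = foreground.copy()
        composite[is_screen] = background[is_screen]
        return composite

    # Usage: composite = blue_screen_composite(fg_frame, bg_frame)

Real compositing software also softens the matte edges and removes blue spill from the foreground, which is part of what the blue screen director and matte artist are judging on set.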
Studio Hand Signals
Although the studio director can relay signals to the crew via a headset (PL line), getting instructions to on-camera talent while the microphones are on must be done silently through the floor director. To do this the floor director uses agreed upon hand signals. In order for the talent to be able to easily and quickly see these signals they are given right next to the talent's camera lens. The talent should never have to conspicuously look around for cues when they are on camera.
Shooting Angles
In an interview the eyes and facial expressions communicate a great deal, often even more than the words the person is saying.
Profile shots (equivalent to shooting the close-ups from camera position A) often hide these important clues. Close-ups of the guest from camera position B, and of Dr. Lee from the camera 2 position, provide much stronger shots.
While a strong close-up of the person talking can be obtained, zooming back slightly gives an over-the-shoulder shot that can even be used to momentarily cover comments by the person whose back is toward the camera.
The Need to Anticipate
An essential talent for a director is the ability to react quickly to changes in action. In fact, the total reaction time is equal to the accumulated time involved in recognizing the need for a specific action, communicating that action to crew members, having them respond, and then telling the technical director what you want done. That can represent a delay of several seconds.
Although that may not seem long, audiences are used to seeing production responses in sync with on-camera action, so even a delay of a few seconds clearly reveals that the director is lagging behind the action.
The solution is for the director to try to anticipate what's going to happen.
During an interview a director should be able to sense when the interviewer's question is about to end or when an answer is winding up. By saying "standby" early and calling for a camera cut a moment before it is needed, a director can cut from one camera to the other almost on the concluding period or question mark of the person's final sentence.
Also, by watching the off-air monitor in the control room, as opposed to the on-air shot of the person talking, the director will often be able to see when the off-camera person is about to interrupt or visually react to what is being said.
Electrical and lighting equipment
Large amounts of power are usually needed for camera lights and everyday electrical needs on a set. On location, independent sources of power are used. Examples of electrical hazards include shorting of electrical wiring or equipment, inadequate wiring, deteriorated wiring or equipment, inadequate grounding of equipment and working in wet locations. All electrical work should be done by licensed electricians and should follow standard electrical safety practices and codes. Safer direct current should be used around water when possible, or ground fault circuit interrupters installed.
Lighting can pose both electrical and health hazards. High-voltage gas discharge lamps such as neons, metal halide lamps and carbon arc lamps are especially hazardous and can pose electrical, ultraviolet radiation and toxic fume hazards.
Lighting equipment should be kept in good condition, regularly inspected and adequately secured to prevent lights from tipping or falling. It is particularly important to check high-voltage discharge lamps for lens cracks that could leak ultraviolet radiation.
Cameras
Camera crews can film in many hazardous situations, including shooting from a helicopter, moving vehicle, camera crane or side of a mountain. Basic types of camera mountings include fixed tripods, dollies for mobile cameras, camera cranes for high shots and insert camera cars for shots of moving vehicles. There have been several fatalities among camera operators while filming under unsafe conditions or near stunts and special effects.
Precautions for camera cranes include testing of lift controls; ensuring a stable surface for the crane base and pedestal; properly laid tracking surfaces; ensuring safe distances from high-tension electrical wires; and body harnesses where required.
Camera cars, that is, vehicles engineered for the mounting of cameras and the towing of the vehicle to be filmed, are recommended instead of mounting cameras on the outside of the vehicle being filmed.
Filming location
Filming in a studio or on a studio lot has the advantage of permanent facilities and equipment, including ventilation systems, power, lighting, scene shops, costume shops and more control over environmental conditions. Studios can be very large in order to accommodate a variety of filming situations.
Filming on location, especially outdoors in remote locations, is more difficult and hazardous than in a studio because transportation, communications, power, food, water, medical services, living quarters and so on must be provided.
The film crew and actors can be exposed to hazardous conditions, including wild animals, poisonous reptiles and plants, civil unrest, climate extremes and adverse local weather conditions, communicable diseases, contaminated food and water, structurally unsafe buildings, and buildings contaminated with asbestos, lead, biological hazards and so on. Filming on water, in the mountains, in deserts and other dangerous locales poses obvious hazards.
On-Camera Talent Issues
Makeup
In the days of low-resolution black-and-white TV, facial features had to be somewhat exaggerated, just as they still are on the stage. Today, makeup is primarily used to cover or diminish facial defects, fill in deep chin clefts and five o'clock shadows on men, and take the shine off faces.
However, when professional talent need to appear at their best under different lighting conditions and for long periods of time, things can get a bit more complicated.
The use of makeup is divided into three categories:
*Basic - designed to compensate for undesirable changes in appearance introduced by the television process.
* Corrective - designed to enhance positive attributes and downplay flaws.
* Character - which introduces major changes in appearance.
Hair
For limited on-camera appearances, no special changes need to be made from normal hair styling. Stray hairs have a way of calling attention to themselves when close-ups are illuminated by backlights, so stray hair needs to be kept in place.
When applied to hair, oils and creams can impart an undesirable patent-leather-like shine because they form a glossy coating, which will be exaggerated by backlighting. The absence of hair, as with bald heads, may call for a powder base carefully matched to skin tones.
Backlights on blond hair, especially platinum blond hair, can cause video levels to exceed an acceptable brightness range, so backlight intensity will need to be dimmed or the beams barn-doored off. When it comes to the effect of backlights, and lighting in general, camera shots and lighting should be carefully checked on a good video monitor before a production.
Jewelry
Jewelry can represent two problems. First, if it's highly reflective, the results can range from a simple distraction to the creation of annoying trailing streaks in the video.
The solution is to either substitute non-reflective jewelry or remove it altogether. If this isn't an option, dulling spray can be considered. Dulling spray should come off with cleaning, but before you use it, especially with expensive jewelry, you will want to make quite sure there are no lasting effects.
The second problem with jewellery is noise.
Intricate pieces of jewellery can cause a moiré effect: the fine details lie very close together, interact with the camera's scanning, appear to shimmer or jump, and reduce the apparent resolution, so guests should be asked to avoid them. Diamonds sparkle and can also leave ugly burns (trailing streaks) in the image.
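A rough way to see when this moiré appears is to compare the pattern's fineness with what the camera can actually sample: once a repeating detail is finer than the sensor's Nyquist limit (half its pixel count across the frame), it aliases and shimmers. The sketch below is not from the text and uses hypothetical numbers purely for illustration.

    # Rough aliasing check: a repeating pattern (jewellery filigree, pinstripes)
    # produces moire once its frequency exceeds the sensor's Nyquist limit.
    def moire_expected(pattern_cycles_across_frame: float,
                       horizontal_pixels: int) -> bool:
        """True when the pattern is finer than the camera can resolve."""
        nyquist_cycles = horizontal_pixels / 2  # max cycles the sensor can sample
        return pattern_cycles_across_frame > nyquist_cycles

    # Roughly 1200 visible repetitions across a 1920-pixel-wide frame exceed
    # the 960-cycle Nyquist limit, so shimmer and moire are likely.
    print(moire_expected(1200, 1920))  # True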
Wardrobe
In general, clothes that are stylish and that flatter the individual are acceptable -- as long as five caveats are kept in mind.
*Colours that exceed 80-percent reflectance, such as white and bright yellow, need to be avoided. White shirts are often a problem because their reflectance value is higher than that of the face; on TV the clothes will then appear over-exposed while the face looks darker.
*Black clothes, especially against a dark background, can not only result in a tonal merger but also make adjacent Caucasian skin tones appear unnaturally light, even chalky.
*Closely spaced stripes in clothing can interact with camera scanning and result in a distracting, moving moiré pattern.
* Very bold patterns can take on a distracting, almost comical appearance.
* Sequined, metallic, and other shiny clothing, which might otherwise look good, can become quite distracting on television, especially under hard lighting.
Pyrotechnics
Pyrotechnics are used to create effects involving explosions, fires, light, smoke and sound concussions. Pyrotechnic materials are usually low explosives, mostly Class B, including flash powder, flash paper, gun cotton, black powder and smokeless powder. They are used in bullet hits (squibs), blank cartridges, flash pots, fuses, mortars, smoke pots and many more. Class A high explosives, such as dynamite, should not be used, although detonating cord sometimes is. The major problems associated with pyrotechnics include premature triggering of the effect, causing a fire by using larger quantities than needed, lack of adequate fire-extinguishing capability, and inadequately trained and experienced pyrotechnics operators. Only experts able to control these effects should handle them.
Stunts
Only experienced stunt performers should attempt fall stunts. When possible, the fall should be simulated. For example, falling down a flight of stairs can be filmed a few stairs at a time so the stunt performer is never out of control, or a fall off a tall building can be simulated by a fall of a few feet onto a net, with a dummy used for the rest of the fall. Stunt performers should also be properly insured.
Animal scenes
Animal scenes are potentially very hazardous because of the unpredictability of animals. Some animals, such as large cats, can attack if startled. Large animals like horses can be a hazard just because of their size. Dangerous, untrained or unhealthy animals should not be used on sets. Venomous reptiles such as rattlesnakes are particularly hazardous. In addition to the hazards to personnel, the health and safety of the animals should be considered.
Only trained animal handlers should be allowed to work with animals. Adequate conditions for the animals are needed, as is basic animal safety equipment, such as fire extinguishers, fire hoses, nets and tranquilizing equipment.
Water stunts
Water stunts can include diving, filming in fast-moving water, speedboat stunts and sea battles. Hazards include drowning, hypothermia in cold water, underwater obstructions and contaminated water. Emergency teams, including certified safety divers, should be on hand for all water stunts. Diver certification for all performers or camera operators using self-contained underwater breathing apparatus and provision of standby breathing equipment are other precautions.
Fight scenes
Fight scenes can involve performers in fistfights or other unarmed combat or the use of knives, swords, firearms and other combat equipment. Many film and stage fights do not involve the use of stunt performers, which increases the risk of injury because of the lack of training.
Simulated weapons, such as knives and swords with retractable blades, are one safeguard. Weapons should be stored carefully. Training is key: the performer should know how to fall and how to use specific weapons. Adequate choreography and rehearsal of the fights are needed, as are proper protective clothing and equipment. A blow should never be aimed directly at an actor. If a fight involves a high degree of hazard, such as falling down a flight of stairs or crashing through a window, a professional stunt double should be used.
Vehicle sequences
Vehicle action sequences have also been a source of many accidents and fatalities. Special effects, such as explosions, crashes, driving into rivers and car chase scenes with multiple cars, are the most common cause of accidents. Motorcycle scenes can be even more hazardous than automobile scenes because the motorcycle operator lacks personal protection.
Film Versus Video
Film uses emulsion layers to capture the image, while videotape converts the image into electrical data on a magnetic strip. The advantage of tape is that it can be overwritten, or wiped by passing it through a magnetic field and re-used.
Relative Equipment Durability
Film equipment is more durable than video equipment, and film cameras are easier to operate and understand. Professional video equipment can now be used under extreme conditions because of the technology available, but colour film stocks still suffer from colour shifts and other such problems when stored at high temperatures.
Technical Quality Compared
Under controlled production conditions, 35mm film is slightly inferior to video in sharpness and colour fidelity if the latest video equipment is used for broadcast. This is primarily because, if the signal from a video camera is recorded with the highest-quality process, no major difference can be detected between the picture coming from the camera and the picture that is later electronically reproduced.
With film, the image is first recorded on negative film. Then a master positive, or intermediate print, is made from the original negative. From the master positive a dupe (duplicate) negative is created, and from that a positive release print is made. This adds up to a minimum of three generations.
At each step, colour and quality variations are introduced by film emulsions and processing, there is a general optical degradation of the image, and dirt and scratches inevitably begin to accumulate on the film surface.
After all this, the film release print must still be projected into a video camera to convert it to an electronic signal for broadcast, which is the form in which a video signal starts out in the first place.
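To make the generational argument concrete, here is a purely hypothetical compounding illustration; the 90 percent retention figure is an assumption, not a measured value, but it shows why three copy generations matter.

    # Hypothetical generational-loss illustration (figures are assumptions):
    retention_per_generation = 0.90  # fraction of quality kept per duplication
    generations = 3                  # master positive -> dupe negative -> release print

    print(round(retention_per_generation ** generations, 2))  # 0.73, i.e. roughly 73% left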
Film is based on a mechanical process. As the film goes through the gate of the camera and the projector, there is an unavoidable loss of perfect registration. This is called judder, and it results in a slight blurring of film images.
The sharpness of video can itself be a drawback; sometimes the softer look of film is desired. Film also introduces subtle tonal and colour changes, so the true values of the original subject matter are not fully preserved, while the slightly sharper image of video tends to be associated with serious, immediate material such as news.
Coping With Brightness Ranges
At first, video cameras simply could not handle the brightness range of film; their brightness range was limited to about 30:1.
Film could tolerate a far wider range of exposure than professional tube-based video cameras, so film had a major advantage over video. With current camera technology this problem is largely no longer faced.
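For a sense of scale, a contrast ratio can be expressed in photographic stops with a base-2 logarithm. The comparison below is not from the text: the 30:1 figure comes from the paragraph above, while the 1000:1 figure is only a commonly cited ballpark for film negative latitude, not a measurement.

    # Express a scene contrast ratio as photographic stops (log base 2).
    import math

    def ratio_to_stops(contrast_ratio: float) -> float:
        return math.log2(contrast_ratio)

    print(round(ratio_to_stops(30), 1))    # ~4.9 stops (early tube video)
    print(round(ratio_to_stops(1000), 1))  # ~10.0 stops (rough film-negative ballpark)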
Film and Videotape Costs
The minute-for-minute cost of 16mm and 35mm film and processing is hundreds of times more than the cost of broadcast-quality video recording.
Unlike film, tape is reusable, which results in even greater savings.
However, the initial cost of video equipment is high: the initial investment in video production and postproduction equipment can easily be ten times the cost of film equipment. The maintenance cost is also greater, but costs are lower when using video for postproduction (special effects, editing, etc.).
Because of these advantages and disadvantages it is often said that video will replace film in motion picture work. The move is well underway.


Video Production

The video production industry creates videos for a wide range of demands, from safety videos for use in corporate environments to medical training videos for use in teaching.

A video production company takes a brief, produces a script, liaises with the customer and puts a production team together. This often includes experts ranging from camera staff to make-up artists. The film is shot, and the initial footage is put on broadcast-quality tapes, edited and presented to the customer as a draft. Soundtracks, visual effects and so on are added, and the final video is presented to the customer. Videography is the art and service of producing a finished video product to a customer's requirements for their consumption.

Television

Television is a telecommunication system for broadcasting and receiving moving pictures and sound over a distance; the term also refers to all aspects of television, from the television set to the programming and transmission. It comprises the TV set, satellites, cables and so on.

TV organisations

Network studio operations have specialised personnel and the latest camera equipment. The production crew may work on one assignment or program at a time, or on a series of programs; this again depends on the size of the organisation.

Video production Units


  • In-house units - they produce pieces to be used by their own business. They can be freelance too.
  • Corporate video units. They are mainly for staff training etc.
  • Campus studio units. They are instructional in nature. They hire the production crew as per the assignments and rely on hired equipment. They produce anything from pop videos to ads.
     


Television Vs Video Production

TV production falls into two main categories -- Television and Video.

1. Television is the production of live or live-on-tape programming using real-time switching between multiple cameras. This process takes place in a studio setting or at a remote venue for events such as sports, concerts, plays, graduations, etc.
Video is defined as using videotape shot on location with a portable camera and then using post-production equipment and film techniques to edit the final production.

2. Television takes more equipment, pre-production planning and setup time. Eventually this levels off, since most studio or sports productions will use the same equipment, configuration and basic script.
Video production may take less equipment and setup. However, video productions often require a detailed pre-production script and storyboard. The cinematographers must be skilled in film-making techniques to provide the editor with:
· Establishing Shots
· Close-Ups
· Cutaways
· Reaction Shots
· Walk In/Out of frame
· Cut On Action

3. There is less control of sound and lighting when shooting in remote situations. The video and audio editing for even the simplest productions can be a time-intensive operation. The rule of thumb is one hour of production time for each minute of finished product, so a ten-minute finished piece implies roughly ten hours of editing work.

Bibliography