Friday, January 30, 2009

Spatial Operating Environments

Today, I want to share a video with you that offers a glimpse into the work of Oblong Industries - the developer of the g-speak spatial operating environment. The reason I chose to share this video is that it's probably the most accurate depiction of what I believe the future of communal photo-browsing will look like (not to mention a number of other applications).

I've been ranting a lot lately about the methods used to merge photos taken by a number of unassociated people into a single viewing platform. I've also mentioned that I don't feel the technology is quite polished yet. Sharing a vision of what a seamless, collective image browser ought to look like is most difficult, especially in writing. Photosynth comes very close in the way it mimics a 3D environment; gigapixel technology has its seamless scaling, which is a must. There's the work being done at Stanford University that allows for a sense of depth from a 2-dimensional photo, and of course, Google can't be left out, as they are leaders in their various efforts to map the world. Equirectangular applications bear importance too. Hmmm - like I said, sharing a vision is most difficult.

Technology like g-speak offers the intuitive flow, sense of movement, and control that I see image browsing reaching years and years from now. There's also the ability to manipulate and compose images the way you want. Anyway, have a look - I'm sure you'll like what you see.



Now was that awesome or what?

Monday, January 26, 2009

The Bigger the Better: A Look at Gigapixel Photography

The technology behind gigapixel photography is relatively simple in theory: instead of engineering a super-high resolution camera, why not just combine a large number of smaller photographs taken at sequential angles?
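
For the code-curious, here's a rough back-of-the-envelope sketch (in Python) of the arithmetic involved: given a lens field of view and a desired overlap, how many shots does it take to cover a scene? The 6x4 degree field of view and 30% overlap are just assumptions for illustration, not figures from any particular rig.

    import math

    def tiles_needed(pano_h_deg, pano_v_deg, lens_h_fov_deg, lens_v_fov_deg, overlap=0.3):
        """Estimate the grid of shots needed to cover a panorama.

        Each new shot only advances by (1 - overlap) of the lens field of view,
        since neighbouring frames must share enough detail to be stitched.
        """
        cols = math.ceil(pano_h_deg / (lens_h_fov_deg * (1 - overlap)))
        rows = math.ceil(pano_v_deg / (lens_v_fov_deg * (1 - overlap)))
        return cols, rows, cols * rows

    # Example: a 180 x 60 degree vista with a long telephoto that sees about 6 x 4 degrees.
    cols, rows, total = tiles_needed(180, 60, 6, 4)
    print(f"{cols} columns x {rows} rows = {total} exposures")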

The process is quite similar to that of equirectangular photography in that massive panoramas can be created that go well beyond the scope of any one lens. The creation process for gigapixel photography uses an electronic tripod head that automatically adjusts the angle and tilt of your camera so that a precise grid of overlapping still photos can be taken. xRes refers to the process as Automated Tile Acquisition - they've provided a video that demonstrates the process here. In the chart below, we can see how the photos are gridded out so that a final, merged image can be produced. It is imperative that sufficient overlap take place so that the individual images can be seamlessly stitched.

GigaPan, a forefront leader in this technology, has developed a three-tier system for those interested in attempting gigapixel photography. First, they provide a robotic camera mount that is designed to work with most compact cameras. Second, they have programmed a software solution called GigaPan Stitcher, which is essential for merging the single shots into a final product. Lastly, they provide an online community for enthusiasts to share their creations. You can buy GigaPan products here.
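
If you'd like to experiment with the stitching step without buying anything, the core idea can be sketched in a few lines of Python using OpenCV's built-in stitcher. The folder and file names below are placeholders, and a real gigapixel workflow needs far more care with memory and projection than this toy example shows.

    import glob
    import cv2

    # Load the overlapping tiles (placeholder path - point this at your own shots).
    tiles = [cv2.imread(path) for path in sorted(glob.glob("tiles/*.jpg"))]

    stitcher = cv2.Stitcher_create()          # panorama mode by default
    status, panorama = stitcher.stitch(tiles)

    if status == cv2.Stitcher_OK:
        cv2.imwrite("panorama.jpg", panorama)
        print("Stitched", len(tiles), "tiles into panorama.jpg")
    else:
        print("Stitching failed with status", status)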


The global embrace of flash-based image viewing solutions has made it easier than ever to view these massive images without slowing your computer to a crawl. I recommend upgrading your Adobe Flash Player to version 10, if you haven't already done so.

Although quite new as far as photographic technology is concerned, gigapixel images range widely in theme and purpose. The most common are ultra-wide panoramas of natural environments from around the world. A few other examples include:

Historic Events - I talked about the impact of new photo technology on history a few days ago. Gigapixel images prove to be yet another way that people generations from now will be able to experience significant events.

Artwork - Never before have the world's most precious works of art been subject to such refined analysis. Kudos to Halta Definizione for embracing this technology and providing online viewings of some fabulous work.
Fun & Games - The potential for "Where's Waldo" style online games is one of many innovative uses. I can't wait to see what others do to spur some attention for their work. Click the image below to try and find the bunnies in Gordon Atwell's gigapixel image.

Gigapixel technology is growing fast due to its simplicity and effectiveness. As mentioned in previous articles, however, I believe this to be but one facet of something larger to come. Just as numerous images are combined to make a gigapixel photo, we will soon see a merging of technologies and software-based processes that will provide interactive viewing experiences far grander than any we have seen before.

Saturday, January 24, 2009

The History of Digital Photography

The road to digital was paved by innovative advancements in analog technology, and because some of the analog systems of the 1980s resembled the processes of today's digital cameras, there has been some misinterpretation as to when digital photography actually began. The answer is a bit ambiguous, because digital photography as we know it is not the advancement of one technology, but the merging of a number of ideas and processes that did not fully solidify until the late 1980s.

The first concept that must be credited was produced by Kodak in 1975. Known most commonly as the Digital Camera Prototype, this beast of a machine was capable of capturing 100x100 pixel monochrome images onto blank cassette tapes. Now, to be clear, the camera used analog processes to produce a final image, and therefore is not the world's first digital camera. However, many of the ideas used here allowed others to gain insight into how digital photography could be brought to fruition.


By 1981, analog systems had come a long way. Enter the Sony Mavica: a state-of-the-art magnetic video camera that could capture 50 still images at 570x490 pixels. The images were recorded onto 2" video floppy disks, which resembled computer diskettes. The video floppy was adopted by other key players like Canon and Panasonic and used until the early 1990's in products like the Canon Xapshot. Again, however, despite similarities in usage and design, these were not yet digital cameras. The recording process was entirely analog, but the ideas and concepts provided further insight into purely digital systems.

By 1988, Fuji had made great strides in their pursuit of a digital photographic solution, and unveiled the Fuji DS-1P. This beauty is regarded by many as the first true digital camera. It recorded images onto SRAM memory cards produced by Toshiba, but was unfortunately never marketed in the United States. By now, though, the gates had been opened, and the road to modern digital technology had undoubtedly begun.


Finally, in 1990, after fifteen years of analog and digital advancements, the world was introduced to the Logitech Fotoman, also known as the Dycam Model 1. It was the first digital camera to be marketed, and is rightfully considered the first consumer digital camera. It was capable of capturing 284x376 pixel stills onto an internal memory chip. As a pioneer camera of sorts, one can only expect there to be disadvantages. The meager resolution was one thing, but the other limitation was that it could only capture monochrome images.

In 1994, Apple Computer entered the digital photography game with the Apple Quicktake 100. The line of Quicktake cameras utilized technology built by Kodak and Fuji to achieve a 640x480 pixel image at 24-bit color. Advancements here included the ability to choose between 320x240 and 640x480 resolution, an included close-up lens adapter, and a "trash button" that would clear the camera's internal memory. There was still no way to preview images on the camera body itself, but an included serial cable made transfer to the computer convenient. The camera retailed for $749.00 US.

Now, don't be fooled by this modern-looking unit. The camera is actually a film SLR - a Nikon F90X - but by 1995, the pro market had made leaps and bounds in the digital world as well. The revolutionary "film back" allowed a standard film camera to be modified into a high-end digital camera by replacing the back door with one that captured digital stills. Produced by Kodak, the line started in 1991 with the Kodak DCS100 (a 1.3mp camera back) and by 1995 had grown to the DCS460, a 6.2mp back that cost over $35,000 (although they were later cleared out at $2,500).

The Sony Mavica returned in the late 1990's, this time fully digital, unlike its same-name predecessor of the early 1980's. The series initially recorded directly onto 3.5" floppy disks before the Memory Stick was released. A floppy-disk adapter was used for a while after the Memory Stick's release, and eventually the camera wrote directly to the proprietary memory card. This was the first digital camera I personally used, although it belonged to a friend. I remember wanting one, but not having the spare $500 to get it - I was 16 years old.

The turn of the millennium gave way to some stunning advancements in the field. The Contax N Digital was the world's first digital SLR with a full-frame (35mm film sized) sensor. It recorded 6mp stills at 3fps onto Compact Flash cards and Microdrives. It recorded jpg, tif, and RAW file types, and used a sensor designed by Philips. Contax began producing photographic hardware in the mid-1930s, but shortly after the digital revolution, it announced that it would no longer be producing cameras (April 12, 2005, to be exact).

In 2004, Nikon set a new standard in digital photography with the announcement of the Nikon D2X. With 12.4mp resolution and up to 8fps shooting speed, this baby was geared for the photo professional. The camera was fast, accurate, and delivered gorgeous images. I first shot with the D2X in 2005, and was blown away by the perfect skintones, refined contrasts, and blazing speed that it delivered every time. Although now almost half a decade old, it is still better than the majority of cameras currently on the shelf.

2008 marked the unveiling of something truly special - the Hasselblad H3DII-50. A camera that most will only dream of shooting, this 50mp powerhouse can shoot images up to 8176×6132 pixels and uses a 645 format sensor. The size of the sensor, the quality of the lenses, and the impressive history of this brand attract only the most finicky of professionals who demand the best for what they do. With a price tag around $40,000 US, the best is what you'd expect, and I haven't heard of anyone being disappointed. I want one.

Throughout 2009, we're going to see some new players gaining strength. RED, for instance, has introduced an impressive modular system to the world of digital photography meaning that cameras can now be pieced together based on the user's needs, and further modified/upgraded on a more selective basis. The merge of pro-photo and pro-video gear will continue, with more products designed to compete with cameras like the Canon 5D Mark II.

Personally, I find it ironic that digital photography began with a process of extracting still images from high-end video cameras, and has run, full-circle, to the point where video can now be created using high-end photographic systems. I suppose the ouroboros effect is true. What will the future bring? Only time and a few PMA events will tell.

Thursday, January 22, 2009

Innovative Concepts #2 - Real-Time, Long-Exposure Preview

"Innovative Concepts" is a series of posts about technology that doesn't exist, but maybe should - you decide.

For most photographers, shooting long exposures is a spontaneous, often unplanned event, undertaken when the perfect conditions present themselves. Clear nights with lots of stars, busy traffic in the heart of a thriving metropolis, or a flowing stream of water through moss-rich foothills are but a few of the most common examples. Finding the perfect exposure time can be challenging, but there are a number of great guides available online for determining shutter speeds, which are very helpful for those looking to pick up some tips and tricks. Additionally, with EXIF metadata readily available for a high percentage of shots posted online, it's also become much easier to attempt to duplicate your favorite long exposures.

When I was out shooting some long exposures at dawn a few days ago, I had very little trouble finding the exposure I needed. With a little trial and error, and some bracketing, I quickly found the shot I was looking for. That's because I was shooting 4-8 second exposures and had ample time to preview my shots and make minor adjustments as required. It's not as easy, though, when one exposure requires many minutes or even hours to complete. In cases like these, you may only get one opportunity to get it right.

Now, with existing technology like Canon's Digic 4 processor, HD video modes, and live preview, I started wondering the other day why it shouldn't be possible to have a streaming, live feed of an exposure in progress. Consider this: you're shooting a long exposure that requires several minutes to complete. You set your camera to bulb mode and begin the shot. Periodically over the next few minutes, you check the LCD screen to see how the exposure is coming along. When it looks just right, you close the shutter and the exposure is complete. Alternatively, if you notice that you blundered the settings, you can stop the shot mid-exposure and start again without missing out on a rare opportunity.
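
To make the idea concrete, here's a rough simulation in Python of how a camera could build that live preview: keep summing short sensor read-outs into an accumulator, show the normalized running total, and stop when it looks right. The fake frames and the brightness threshold are obviously stand-ins for a real sensor and a real photographer's eye, not anything Canon or Nikon actually do.

    import numpy as np

    def fake_short_exposure(shape=(480, 640, 3)):
        """Stand-in for one short read-out from a live sensor."""
        return np.random.poisson(lam=2.0, size=shape).astype(np.float32)

    accumulator = np.zeros((480, 640, 3), dtype=np.float32)
    for frame_count in range(1, 601):                  # up to 600 sub-exposures
        accumulator += fake_short_exposure()

        # Normalize the running sum so it could be shown on the LCD as a preview.
        preview = (accumulator / accumulator.max() * 255).astype(np.uint8)

        if frame_count % 50 == 0:
            print(f"{frame_count} sub-exposures in, preview max {preview.max()}, "
                  f"mean level {accumulator.mean():.1f}")

        if accumulator.mean() > 1000:                  # "looks right" - close the shutter
            print("Exposure ended early by the photographer.")
            break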


I, for one, would love to have this option available. How about you?

Stay tuned for more Innovative Concepts from the Binary Crumbs, and if you bigwigs over at Canon or Nikon ever stumble upon this "regular guy's" blog, we accept cash and cheques for our ideas ; )

Wednesday, January 21, 2009

The Framework for Dynamic Image Maps: An Imminent Technology Merge

I spoke a while ago about how GPS is poised to change the way the world approaches photography. In that article I mentioned that "having exact times, dates, and locations embedded into your photos allows people living generations from now to use the photos as a historical reference." I wanted to take that concept a bit further today and discuss it in terms of worldwide image mapping.

When I talk about the future of photography and how it will become a tightly integrated series of photos categorized not only by image content, but by geo-location and time, I want to be clear that time data has an extremely important role to play here, especially when combined with the already significant GPS information. The end result, in my opinion, will not be a static image of the entire world that uses pictures from all people, satellites, and other imaging processes, but a dynamic moving image that changes as the earth does, and progresses constantly in parallel with time itself.

Take the Inaugural Celebration held at the Washington D.C. National Mall yesterday, for instance. This celebration is one that will never be questioned in terms of when and where it happened. However, the images compiled from this celebration combine to create but a small fraction of all the events that have happened here in the past, and that will surely happen in the future. Thanks to GeoEye, we can see this moment in time from the perspective of satellite imagery (left). Imagine a final composition that takes this satellite image and merges it with all of the other photos taken at the same time by countless onlookers - much like Photosynth has compiled here.

Furthermore, let's take these two technologies and incorporate the impressive Google Street View technology. Considering these recent advances in dynamic image viewing software, I believe there is an overall "moving picture" yet to be seen, one that utilizes individual photo uploads from the masses of people involved - perhaps even, one day, viewable in real time. The question is not a matter of if a collective movement will take place; rather, it is about who the forefront leader in the creation of this technology will be, and when the merge of what we currently have will take place. There are monstrous strides being taken on a variety of fronts, but it is when all these paths collide that something truly epic is going to happen.
What is also missing from this collective movement towards 3D image viewing is the fourth dimension: time. None of the power players (Google's Street View, GeoEye's satellite imagery, and Microsoft's Photosynth) is offering a "time toggle," because there simply hasn't been a long enough stretch of time captured yet. As the years pass, though, I believe we can expect time to play a role as important as geo-tagging does now. The impact will undoubtedly be monumental, as it will provide humankind with a new tool for observing world history, and reroute the current approach to news, journalism, and broadcasting.
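
As a small illustration of what a "time toggle" could look like under the hood, here's a hypothetical Python sketch that buckets geo-tagged photos near a chosen spot into calendar-day slices a viewer could scrub through. The records, field layout, and tolerance value are all invented for the example.

    from collections import defaultdict
    from datetime import datetime

    # Hypothetical records: (latitude, longitude, ISO timestamp, filename)
    photos = [
        (38.8895, -77.0353, "2009-01-20T11:58:00", "mall_001.jpg"),
        (38.8895, -77.0353, "2009-01-20T12:05:00", "mall_002.jpg"),
        (38.8897, -77.0351, "2009-01-21T09:30:00", "mall_003.jpg"),
    ]

    def time_slices(photos, lat, lon, tolerance=0.01):
        """Group photos taken near (lat, lon) into per-day buckets."""
        slices = defaultdict(list)
        for p_lat, p_lon, stamp, name in photos:
            if abs(p_lat - lat) <= tolerance and abs(p_lon - lon) <= tolerance:
                slices[datetime.fromisoformat(stamp).date()].append(name)
        return dict(slices)

    for day, names in sorted(time_slices(photos, 38.8895, -77.0353).items()):
        print(day, names)      # a "time toggle" would step through these buckets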

Thursday, January 15, 2009

Critiquing and Editing Your Photos: The Basics

There are literally thousands of tutorials out there that explain photo color correction, level adjustment, and so forth. The majority of them take a sample image and provide step-by-step guidance to achieve an impressive final result. I won't argue - there are some complicated editing techniques out there that require rigid steps to be followed very closely, but in my opinion, standard post-production (color, levels, contrast) is not one of them. So instead of taking a photo and showing you how I'd correct it, I just want to brush over what you should look for in all photos before they are printed or shared. Each photo is different and will require individual attention - there is NO one technique that will work for all of them.


Question #1 - Is white, white?
In other words, was the camera white balanced properly? You'll be able to tell immediately if your photo has a bland yellow or blue tint to it. Why yellow or blue? Because color temperature (measured in degrees Kelvin) runs along a blue-to-yellow axis, so a mismatched white balance shifts what a standard sensor or film records toward one of those two tints.
For instance, indoor light is often around 3000-4500 degrees and will appear yellowish if your camera is set to daylight (6000 degrees). In contrast, shady or overcast days may give a bluish tint to your photos. If the image is a bit blue or yellow, proceed to question #2; if the color looks good, go to question #3.
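
If you're curious what a correction like that looks like in code, here's a minimal "gray world" white balance sketch in Python (numpy and Pillow assumed to be installed, with a placeholder filename). It scales each channel so the image averages out to neutral, which is a rough stand-in for a manual temperature tweak, not how any particular editor does it.

    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("photo.jpg").convert("RGB"), dtype=np.float32)  # placeholder filename

    # Gray-world assumption: the scene should average to neutral gray,
    # so scale each channel toward the overall mean brightness.
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gain = channel_means.mean() / channel_means
    balanced = np.clip(img * gain, 0, 255).astype(np.uint8)

    Image.fromarray(balanced).save("photo_balanced.jpg")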

Question #2 - How the heck do I correct this crappy color?
The first step, according to a number of tutorials I've seen, is to rush into your RGB color balance and start making modifications. That is wrong - first, you should adjust the color saturation. Not for the entire color spectrum - just the appropriate color channel. If you don't desaturate the image first, you're just turning all of that terrible yellow into terrible blue.

So, if your image looks too yellow, then desaturate the yellow channel until the image appears more color-accurate. Then open up color balance and fine tune it. In RGB, the opposite of blue is yellow. Hmm, coincidence? I think not.
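
Here's roughly what that selective desaturation looks like in Python with OpenCV, again with a placeholder filename; the hue window and the 0.6 factor are arbitrary starting points, so tune them by eye.

    import cv2
    import numpy as np

    bgr = cv2.imread("photo.jpg")                              # placeholder filename
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)

    # In OpenCV's 8-bit HSV, hue runs 0-179 and yellow sits near 30.
    yellow = (hsv[..., 0] > 20) & (hsv[..., 0] < 40)
    hsv[..., 1][yellow] *= 0.6                                 # knock back only the yellow saturation

    out = cv2.cvtColor(np.clip(hsv, 0, 255).astype(np.uint8), cv2.COLOR_HSV2BGR)
    cv2.imwrite("photo_less_yellow.jpg", out)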

Question #3 - Is the photo too dark? Too light?
Jump right into your level adjustments. Midtones are a good place to start. Fine-tune with your darks and lights, being careful not to overexpose your lights or underexpose your darks. Practice makes perfect. Learn curve adjustments and be able to read histograms. Trust your eyes, though, and never rely on a histogram to tell you what looks right.
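
For reference, a basic levels adjustment is just a remap of black point, white point, and a midtone gamma. A minimal Python sketch (numpy and Pillow assumed; the filename and the example numbers are placeholders):

    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("photo.jpg").convert("RGB"), dtype=np.float32) / 255.0

    black, white, gamma = 0.05, 0.95, 1.2      # example black point, white point, midtone boost

    # Classic levels: stretch [black, white] to [0, 1], then lift the midtones.
    stretched = np.clip((img - black) / (white - black), 0.0, 1.0)
    adjusted = stretched ** (1.0 / gamma)

    Image.fromarray((adjusted * 255).astype(np.uint8)).save("photo_levels.jpg")

    # A quick histogram read-out (16 brightness bins) - but trust your eyes first.
    counts, _ = np.histogram(adjusted, bins=16, range=(0.0, 1.0))
    print(counts)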

Question #4 - Does my photo look drab?

If your image still looks like it needs to "pop," then add some contrast and adjust the brightness accordingly.

Question #5 - Is it framed well?
If not, crop it. General cropping rules: straighten your horizon line, learn the rule of thirds, stick to standard photo sizes unless you have a specific need (2:3, 4:5, or 11:17 ratios are good), and size at 300 dpi or higher.
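
If you batch-crop on the computer, the center-crop-to-ratio step is easy to script. A small Pillow sketch (the filename is a placeholder, and 2:3 is just the example ratio):

    from PIL import Image

    img = Image.open("photo.jpg")            # placeholder filename
    ratio = 2 / 3                            # portrait 2:3 - use 3/2, 4/5, etc. as needed

    w, h = img.size
    if w / h > ratio:                        # too wide: trim the sides
        new_w = int(h * ratio)
        left = (w - new_w) // 2
        box = (left, 0, left + new_w, h)
    else:                                    # too tall: trim top and bottom
        new_h = int(w / ratio)
        top = (h - new_h) // 2
        box = (0, top, w, top + new_h)

    img.crop(box).save("photo_cropped.jpg")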

Question #6 - How do I save it?

Archive your photos as .tif files. They will not be subject to image compression inherent in the .jpg file type. Always keep a copy of your original, at least until you've mastered the above techniques.
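
In script form, the archiving step is only a few lines with Pillow (the filenames and folder are placeholders):

    import os
    import shutil
    from PIL import Image

    os.makedirs("archive", exist_ok=True)

    # Save the edited version losslessly; LZW-compressed TIFF avoids JPEG's generational loss.
    Image.open("photo_edited.jpg").save("archive/photo_edited.tif", compression="tiff_lzw")

    # Keep an untouched copy of the original alongside it.
    shutil.copy("photo_original.jpg", "archive/photo_original.jpg")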

You're Finished!!

If you're ready to take your image a bit further, then feel free to sharpen it, apply a noise reduction filter, clone out dust or hot pixels, convert it to B&W, and/or play around with HDR techniques. Don't even bother with any of these until you're comfortable with standard color photo correction. Master that first - trust me. I've seen more failed attempts at advanced editing than I can count.

In a nutshell, life's too short for crappy photos. Cheers.

Wednesday, January 14, 2009

Pro Photos | Hobo Prices

As many of you know, the biggest battle for a photographer is getting enough light. Being able to operate under a minimal amount of light usually requires spending a maximum amount of cash – both wide and telephoto lenses that are a constant f2.8 are hard to find under 1000 bucks. Now, I know some of you are thinking that a good option is to pick up a Sigma, Tamron, or Tokina lens to cut the cost, and it's true, you could do just that. I wouldn't, as you will still likely be spending over 500 dollars on an aftermarket alternative, but you could. Furthermore, there is another way to drop that aperture way down, so let's explore. Here are a few scenarios that often require lenses with apertures of f2.8 or better:

Live Performance
Whether it's a band playing a show in a dark bar or your kid's first piano recital, you likely won't have a clue what the light will be like, but if one thing's for sure, it's going to be bad. If you want to avoid camera shake you can get a basic lens with an image stabilizer, but if the subject matter has any movement (be it a six-year-old's hand on the keys, or a crazy hipster doing rock kicks), the image stabilizer isn't going to reach out and slow them down. The only way you will freeze action is with an appropriate shutter speed (i.e. 1/60 for The Ivories or 1/500 for Eddie Van). Always remember photo nerd rule number one: when aperture goes down, shutter speed goes up.
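
That rule is easy to put numbers on. Here's a quick Python sketch of equivalent exposures; the metered values are made up for the example.

    import math

    def equivalent_shutter(base_shutter_s, base_fstop, new_fstop):
        """Hold exposure constant: every stop gained at the aperture halves the shutter time."""
        stops_gained = 2 * math.log2(base_fstop / new_fstop)   # exposure scales with 1 / f-number squared
        return base_shutter_s / (2 ** stops_gained)

    # Metered 1/15 s at f5.6 in a dark bar - what does an f1.8 lens buy you?
    new_shutter = equivalent_shutter(1 / 15, 5.6, 1.8)
    print(f"about 1/{round(1 / new_shutter)} s at f1.8")       # roughly 1/145 s - enough to freeze hands on keys
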
Portraits
We have all seen the shot of the pretty girl where her eyes are tack sharp but the back of her head, even as close as the ear, is in a nice soft focus. Aperture not only has a direct link to shutter speed but, perhaps even more importantly, it controls what and how much will be in focus. This is commonly referred to as depth of field (DOF), and it is one of the harder things to fully understand. Let's put it this way: you have a dog, a man, and a tree all in a row, and you are looking at them so the dog is closest to you, then the man, and off in the distance is the tree. Now, assuming your focus point never changes (for this example, let's say the focus is on the dog's nose), as you move the aperture to a higher number, more will come into focus. So at f2.8, the dog's nose will be sharp, but the rest will be blurry; at f4 the whole head will be in focus; at f5.6 the entire dog; at f8 all of the dog and some of the man; at f11 the dog and the man; at f16 the dog, the man, and some of the leaves of the tree; and at f22 the dog, the man, the tree, and the axe-wielding maniac on the second branch are all in focus. Please note that this is greatly simplified for example purposes.
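
If you want actual numbers for the dog, here's a thin-lens depth-of-field sketch in Python. The 85mm focal length, 3m subject distance, and 0.03mm circle of confusion are assumptions chosen for the example, and real lenses won't match the textbook formula exactly.

    def dof_limits(focal_mm, f_number, subject_m, coc_mm=0.03):
        """Near/far limits of acceptable focus (thin-lens approximation)."""
        s = subject_m * 1000.0                                   # work in millimetres
        hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
        near = s * (hyperfocal - focal_mm) / (hyperfocal + s - 2 * focal_mm)
        far = (s * (hyperfocal - focal_mm) / (hyperfocal - s)
               if s < hyperfocal else float("inf"))
        return near / 1000.0, far / 1000.0                       # back to metres

    # The dog's nose is 3 m away, shot with an 85 mm lens.
    for f_number in (2.8, 4, 5.6, 8, 11, 16, 22):
        near, far = dof_limits(85, f_number, 3)
        print(f"f{f_number}: sharp from {near:.2f} m to {far:.2f} m")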

Sports
Take the example above and think about how useful fast lenses would be for sports. Bad light is the norm for indoor or late-night games, and freezing action in any sport that involves movement (with the exception of all those competitive staring-contest enthusiasts) is difficult. A fast lens also lets you isolate the subjects - in this case, players - with minimal depth of field so that they are in focus and not the Gatorade container behind them. Think of those classic shots of a quarterback getting ready to throw in a crowded stadium: you see him clearly, but you can't make out faces in the crowd. As a further benefit, the lower aperture also means you can decrease your ISO, and lower ISO = less noise.

If all that was really obvious to you, then consider it a refresher. My point is that fast lenses, or ones with low apertures, let in lots of light and are very useful. But how do you get one without having to sell a kidney?

The answer: the 50mm f1.8 - a lens that has been around forever and is therefore easily forgotten. This is commonly referred to as a standard lens, because your eyes at their widest see in roughly the 43mm to 46mm range. The beauty of this lens is that, for the most part, they are under 150 dollars brand new (sorry, Nikon D40, D40x, and D60 users - you will have to shell out more for a 50mm f1.4 due to camera restrictions). So you get a very fast lens with a very versatile range - somewhere between a standard and a portrait lens depending on your camera's crop factor - for about the same price as a couple of extra batteries. The drawback is that it is a fixed lens, meaning if you want to get closer or wider you have to move backwards or forwards. Try to think of this as a benefit, as not having a zoom lens makes you think more about your angles, composition, and overall aesthetics. The lens opens up a huge world of opportunity - shooting in bad light, freezing action, cleaning up noise, controlling what the viewer focuses on - and best of all, it's very inexpensive.

Things to keep in mind with a 50mm lens are:
  • The DOF of an f1.8 lens is very narrow. One of my first realizations of this came while shooting a live band: when shooting the lead singer (who was playing a guitar) from a low angle, I could have his face in focus or the neck of the guitar in focus, but not both.
  • Fixed lenses may not get you as close as you would like - especially for sports. Don't be afraid to crop your photos, but don't forget to keep thinking and moving. Never shoot a photo under the assumption that you can fix it on your computer.
  • The lower the lens aperture, the more flexibility you have in finding the focus “sweet spot.” Keep in mind that this is not always true, but is more common than not with a 50mm lens. When you have enough light and you can stop down your lens, it will be very sharp.
  • Most 50mm lenses are at their sharpest at f8 or f11, making this lens advantageous for more than just bad lighting situations.

Wednesday, January 7, 2009

Bringing it Back Old School: A Look at TTV Photography

We've been pretty focused on the latest and greatest innovations in photographic technology lately, so I wanted to take a moment to discuss a photographic style that merges the old and the new. What's known as TTV, or "Through the Viewfinder" photography, is the process of using a macro lens to shoot through the bubble-glass viewfinder found on a number of twin-lens reflex (TLR) cameras. The most common camera used in TTV is the Kodak Duaflex, which was produced in the late 1940's. The cameras are still readily available in online marketplaces like eBay, and can sometimes be found in old camera shops, pawnshops, or even the odd dusty attic. Most sell for around $20-30, but if you sweet-talk the right person you can probably get yours for free like I did.

Overall, the hardware needs are pretty simple:
  • DSLR with Macro Lens (point-and-shoots are rumored to work too, and of course, so will film SLRs if you're so inclined).
  • Opaque Cardboard Tunnel - commonly made from cereal boxes and such
  • TLR Camera with Bubble-Glass Viewfinder
As TTV photography is practiced by many, it's not hard to find a number of methods for constructing the cardboard tunnel. Of all that I've seen, by far the best is a template designed by Russ Morris. He's even been so kind as to provide a downloadable pdf of the template he uses. In addition to the pdf, Russ provides a lot of valuable information on his website for those looking to get started with TTV. You will surely save yourself a lot of trial and error by reading about the lessons he has learned.

If you have a hard time tracking down a Kodak Duaflex, have a look at some of these alternatives. A variety of cameras exist that will work just fine for TTV photography, but you may have to tinker around a bit to get everything working just right. Essentially, the setup remains the same for all cameras: the macro lens is inserted into the top end of the tunnel, while the TLR camera sits at the bottom. The length of the tunnel must be sized according to the focusing distance of your lens. Gaffer tape will help you stop light from entering any of the creases/folds. Also, you may need to line the inner areas with foam to completely block any remaining light leaks. Once your system is sturdy and sealed, you're good to start shooting. A bit of fine-tuning may be required as you go, but that's just the nature of photography.

Here's a shot taken by my good friend Patrick Schmidt - he's accumulated a number of quality TTV shots that are publicly viewable on his flickr photostream. I highly suggest having a browse if you're looking for a bit of inspiration to get started.
Additionally, have a look at some of the other TTV groups on Flickr if you're interested in seeing some alternative approaches (samples: A, B, C, D). All in all, this is a very inexpensive style of photography to attempt, especially if you already own a macro lens. Admittedly, it's geared towards photographic artists, but as with all media production mediums, it never hurts to experiment.

Sunday, January 4, 2009

GPS Meets Photography

In the 37 years since GPS prototypes were first tested by the US Air Force, it has grown into a fundamentally important technology in a variety of consumer-based applications. More recently, GPS has become increasingly significant in photographic mapping technology. Currently, much of the data that exists results from manually "pinpointing" photo locations on a virtual map. However, new technology is poised to change this labor-intensive process by automatically saving geo-locations in a photo's metadata.

For the general public, the unobtrusive GPS addition has already been implemented in models like Nikon's Coolpix P6000. For SLR shooters, the Nikon GP-1 provides an optional accessory for those interested in obtaining this data. SLR camera bodies have already begun to be modified to accommodate the new unit, which is designed to plug into an input on the side while locking into the camera's hotshoe. The folks at Digital Review got their hands on a tester if you're interested in learning more about the GP-1.


So how does this new GPS technology affect photography?

For starters, geo-tags provide vital information to the image mapping software currently being developed by a number of the world's most powerful technology players. I spoke about these movements in a previous post. Searching for photos on a virtual map is becoming increasingly popular and is offered by a variety of sites around the web. Automated geo-tagging will lead to more mappable images, with less work required on the user's end.
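
For anyone who wants to poke at these tags themselves, here's a small Python sketch that pulls the coordinates out of a geo-tagged JPEG using Pillow and converts them to decimal degrees. The filename is a placeholder, and the photo must actually carry GPS EXIF data for this to return anything.

    from PIL import Image
    from PIL.ExifTags import GPSTAGS

    def gps_from_exif(path):
        """Return (latitude, longitude) in decimal degrees, or None if untagged."""
        exif = Image.open(path)._getexif() or {}
        gps_raw = exif.get(34853)                       # 34853 is the GPSInfo EXIF tag
        if not gps_raw:
            return None
        gps = {GPSTAGS.get(key, key): value for key, value in gps_raw.items()}

        def to_degrees(dms, ref):
            degrees, minutes, seconds = (float(x) for x in dms)
            value = degrees + minutes / 60 + seconds / 3600
            return -value if ref in ("S", "W") else value

        return (to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
                to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"]))

    print(gps_from_exif("geotagged_photo.jpg"))          # placeholder file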

For photographers on the prowl for their next hot location, shared geo-coordinates make it easier than ever to find your way to the places where some of your favorite shots were taken. For example, wouldn't it be cool to backpack to some of the precise locations where Ansel Adams took his most popular photos? Too bad he didn't have GPS technology in some of those old, large-format beasts.

In real estate marketplaces, agents can benefit from this new technology by automatically geo-locating their listings. Almost all real estate sites offer some sort of map search, but currently the information must be entered manually. Perhaps as the technology grows an automated geo-location system can be implemented into current MLS databases that pulls information from the listing photos and applies it to the listing itself.

Furthermore, historians and geographers can benefit by accurately gauging change to landmarks and landscapes. Having exact times, dates, and locations embedded into your photos allows people living generations from now to use the photos as a historical reference.

In the coming years, I expect to see this technology grow and become more tightly integrated with camera systems right off the shelf. The "optional accessory" will only last for a generation or two at best before being integrated into the camera itself. As someone who loves to travel and shoot in places around the world, I look forward to this new technology and am excited to see how it grows.