CPU
CPU is a well-known acronym in the computing world, but what exactly is inside one? Learn more about CPUs, including the differences between Pentium and Celeron processors and how graphics cards work.
Can computer chips be air-conditioned?
It takes a pretty big AC unit to lower the temperature of your entire house or apartment. And while you may not think about keeping your computer cool, it can overheat, too. Is there an air conditioner small enough to cool a computer chip?
During the hot months of summer, many people stay indoors in order to avoid sweat and sunburns, enjoying the cool comforts of an air-conditioned apartment or house instead. While a room without any cooling system would feel stuffy and uncomfortable, air conditioners provide us with comfortable temperatures of around 70 degrees Fahrenheit (21 degrees Celsius).
The air conditioner in your living space works just like the refrigerator in your kitchen -- it uses similar liquids, gases and cooling systems to create cooler temperatures. Instead of simply circulating air around like a fan, both technologies work by actually removing heat from a specified area. A compressor compresses a cool gas known as a refrigerant, causing it to become hot. The hot gas runs through a set of hot coils and condenses into a liquid until it reaches an expansion valve. The valve turns the liquid back into a cool gas by letting it evaporate; the gas then runs through another set of coils. This second set of cooled coils, facing the area that needs to be air-conditioned, absorbs heat from the air to cool down an apartment -- or a refrigerator.
The big difference, of course, is that the cooling system in your refrigerator is a small, enclosed box. Once closed, the door traps cool air inside to keep food and drinks fresh for long periods of time. An apartment's air conditioner, on the other hand, is responsible for cooling a much larger space. The walls and doors of the apartment act like the refrigerator door, keeping the cool air from escaping.
But what if engineers took the technology used in air conditioners and applied it to a much smaller scale -- a micro scale, for instance? Scientists at Purdue University's School of Mechanical Engineering, led by Professor Issam Mudawar, are developing an experimental system that takes cooling techniques from air-conditioning systems to cool down small, hard-working computer chips.
How does an air-conditioned computer chip work, especially on such a small scale? Will you soon find air-conditioning systems in personal computers, or do computers even get hot enough to require such an efficient technology? If not, what kinds of computer chips actually need to be air-conditioned?
EUVL Chipmaking
Silicon microprocessors are about to reach the limits of their capacity. But one technology may extend the life of the silicon microchip -- it's called extreme-ultraviolet lithography, and it may keep silicon useful for a few years longer.
Silicon has been the heart of the world's technology boom for nearly half a century, but microprocessor manufacturers have all but squeezed the life out of it. The current technology used to make microprocessors will begin to reach its limit around 2005. At that time, chipmakers will have to look to other technologies to cram more transistors onto silicon to create more powerful chips. Many are already looking at extreme-ultraviolet lithography (EUVL) as a way to extend the life of silicon at least until the end of the decade.
The current process used to pack more and more transistors onto a chip is called deep-ultraviolet lithography, which is a photography-like technique that focuses light through lenses to carve circuit patterns on silicon wafers. Manufacturers are concerned that this technique might soon be problematic as the laws of physics intervene.
Using extreme-ultraviolet (EUV) light to carve transistors in silicon wafers will lead to microprocessors that are up to 100 times faster than today's most powerful chips, and to memory chips with similar increases in storage capacity. In this article, you will learn about the current lithography technique used to make chips, and how EUVL will squeeze even more transistors onto chips beginning around 2007.
Graphics Cards
The images you see on your monitor are made of tiny dots called pixels. At most resolution settings, a screen displays over a million pixels, and the computer has to decide what to do with every one in order to create an image. How do graphics cards work?
The images you see on your monitor are made of tiny dots called pixels. At most common resolution settings, a screen displays over a million pixels, and the computer has to decide what to do with every one in order to create an image. To do this, it needs a translator -- something to take binary data from the CPU and turn it into a picture you can see. Unless a computer has graphics capability built into the motherboard, that translation takes place on the graphics card.
A graphics card's job is complex, but its principles and components are easy to understand. In this article, we will look at the basic parts of a video card and what they do. We'll also examine the factors that work together to make a fast, efficient graphics card.
Think of a computer as a company with its own art department. When people in the company want a piece of artwork, they send a request to the art department. The art department decides how to create the image and then puts it on paper. The end result is that someone's idea becomes an actual, viewable picture.
A graphics card works along the same principles. The CPU, working in conjunction with software applications, sends information about the image to the graphics card. The graphics card decides how to use the pixels on the screen to create the image. It then sends that information to the monitor through a cable.
Creating an image out of binary data is a demanding process. To make a 3-D image, the graphics card first creates a wire frame out of straight lines. Then, it rasterizes the image (fills in the remaining pixels). It also adds lighting, texture and color. For fast-paced games, the computer has to go through this process about sixty times per second. Without a graphics card to perform the necessary calculations, the workload would be too much for the computer to handle.
The graphics card accomplishes this task using four main components:
• A motherboard connection for data and power
• A processor to decide what to do with each pixel on the screen
• Memory to hold information about each pixel and to temporarily store completed pictures
• A monitor connection so you can see the final result
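A back-of-the-envelope calculation shows the scale of that per-pixel work. The resolution and frame rate below are illustrative assumptions, not figures from any particular card:

```python
# Rough estimate of the workload described above.
width, height = 1280, 1024          # a common resolution with over a million pixels
fps = 60                            # "about sixty times per second" for fast-paced games

pixels_per_frame = width * height
pixels_per_second = pixels_per_frame * fps

print(pixels_per_frame)     # 1310720 pixel decisions per frame
print(pixels_per_second)    # 78643200 pixel decisions every second
```

Nearly 80 million per-pixel decisions a second, before lighting, texture and color are even considered, is why offloading the work to a dedicated processor matters.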
Microprocessors
The microprocessor determines the processing power available for any application you run -- without it, there IS no computer. Learn all about this amazing, ever-shrinking technology that makes your computer compute.
The computer you are using to read this page uses a microprocessor to do its work. The microprocessor is the heart of any normal computer, whether it is a desktop machine, a server or a laptop. The microprocessor you are using might be a Pentium, a K6, a PowerPC, a Sparc or any of the many other brands and types of microprocessors, but they all do approximately the same thing in approximately the same way.
A microprocessor -- also known as a CPU or central processing unit -- is a complete computation engine that is fabricated on a single chip. The first microprocessor was the Intel 4004, introduced in 1971. The 4004 was not very powerful -- all it could do was add and subtract, and it could only do that 4 bits at a time. But it was amazing that everything was on one chip. Prior to the 4004, engineers built computers either from collections of chips or from discrete components (transistors wired one at a time). The 4004 powered one of the first portable electronic calculators.
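The phrase "4 bits at a time" means the 4004's registers could only hold values from 0 to 15, so results that didn't fit simply wrapped around. A minimal sketch of that behavior:

```python
def add4(a, b):
    """Add two numbers the way a 4-bit ALU would: only the
    low four bits of the result are kept, so values wrap at 16."""
    return (a + b) & 0xF  # mask off everything above bit 3

print(add4(3, 5))   # 8  -- fits comfortably in 4 bits
print(add4(9, 9))   # 2  -- 18 overflows a 4-bit register (18 mod 16 = 2)
```

Handling anything larger than 15 meant chaining several such operations together, which is part of why the 4004 was limited to simple addition and subtraction.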
If you have ever wondered what the microprocessor in your computer is doing, or if you have ever wondered about the differences between types of microprocessors, then read on. In this article, you will learn how fairly simple digital logic techniques allow a computer to do its job, whether it's playing a game or spell-checking a document!
How small can CPUs get?
Advances in technology have allowed microprocessor manufacturers to double the number of transistors on a CPU chip every two years. How long can they keep this up?
During the 20th century, inventors created devices that we regularly depend upon. Arguably, one of the most important inventions was the transistor. Developed in 1947 by engineers working for Bell Laboratories, the original purpose of the transistor was to amplify sound over phone lines. The transistor replaced an older technology -- vacuum tubes. The tubes were unreliable and bulky, and they generated a lot of heat.
The first transistor was a point-contact transistor that measured half an inch (1.27 centimeters) in height. The transistor wasn't very powerful, but physicists recognized the potential of the device. Before long, physicists and engineers began to incorporate transistors into various electronic devices. And as time passed, they also learned how to make transistors smaller and more efficient.
In 1958, engineers attached two transistors to a silicon crystal and created the world's first integrated circuit [source: Intel]. In turn, the integrated circuit paved the way to the development of the microprocessor. If you compare a computer to a human being, the microprocessor would be the brain. It makes calculations and processes data.
In the 1960s, computer scientist (and Intel co-founder) Gordon Moore made an interesting observation. He noticed that every 12 months, engineers were able to double the number of transistors on a square-inch piece of silicon. Like clockwork, engineers were finding ways to reduce the size of transistors. It's because of these small transistors that we have electronic devices like personal computers, smartphones and MP3 players. Without transistors, we would still be using vacuum tubes and mechanical switches to make calculations.
Since Moore's observation, the shrinking trend has continued. But it hasn't kept up with the pace Moore observed. These days, the number of transistors doubles every 24 months. But that raises an interesting question: How small can transistors -- and by extension, CPUs -- get? In 1947, a single transistor measured a little over one-hundredth of a meter high. Today, Intel produces microprocessors with transistors measuring only 45 nanometers wide. A nanometer is one-billionth of a meter!
Intel and other microprocessor manufacturers are already working on the next generation of chips. These will use transistors measuring a mere 32 nanometers in width. But some physicists and engineers think we might be bumping up against some fundamental physical limits when it comes to transistor size.
The Nehalem Microprocessor Microarchitecture
Like clockwork, microprocessor manufacturers develop new and better chips to power our computers. What makes Intel's Nehalem chip so different?
Take the number two and double it and you've got four. Double it again and you've got eight. Continue this trend of doubling the previous product and within 10 rounds you're up to 1,024. By 20 rounds you've hit 1,048,576. This is called exponential growth. It's the principle behind one of the most important concepts in the evolution of electronics.
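The doubling sequence in the paragraph above is easy to verify with a short loop:

```python
# Start at two and double repeatedly, recording each round's result.
value = 2
milestones = {}
for round_num in range(1, 21):
    value *= 2
    milestones[round_num] = value

print(milestones[1])    # 4
print(milestones[2])    # 8
print(milestones[9])    # 1024     -- "within 10 rounds you're up to 1,024"
print(milestones[19])   # 1048576  -- "by 20 rounds you've hit 1,048,576"
```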
In 1965, Intel co-founder Gordon Moore made an observation that has since dictated the direction of the semiconductor industry. Moore noted that the density of transistors on a chip doubled every year. That meant that every 12 months, chip manufacturers were finding ways to shrink transistor sizes so that twice as many could fit on a chip substrate.
Moore pointed out that the density of transistors on a chip and the cost of manufacturing chips were tied together. But the media -- and just about everybody else -- latched on to the idea that the microchip industry was developing at an exponential rate. Moore's observations and predictions morphed into a concept we call Moore's Law.
Over the years, people have tweaked Moore's Law to fit the parameters of chip development. At one point, the length of time between doubling the number of transistors on a chip increased to 18 months. Today, it's more like two years. That's still an impressive achievement considering that today's top microprocessors contain more than a billion transistors on a single chip.
Another way to look at Moore's Law is to say that the processing power of a microchip doubles in capacity every two years. That's almost the same as saying the number of transistors doubles -- microprocessors draw processing power from transistors. But another way to boost processor power is to find new ways to design chips so that they're more efficient.
This brings us back to Intel. Intel's philosophy is to follow a tick-tock strategy. The tick refers to creating new methods of building smaller transistors. The tock refers to maximizing the microprocessor's power and speed. The most recent Intel tick chip to hit the market (at the time of this writing) is the Penryn chip, which has transistors on the 45-nanometer scale. A nanometer is one-billionth of a meter -- to put that in the proper perspective, an average human hair is about 100,000 nanometers in diameter.
So what's the tock? That would be the new Core i7 microprocessor from Intel. It has transistors the same size as the Penryn's, but uses Intel's new Nehalem microarchitecture to increase power and speed. By following this tick-tock philosophy, Intel hopes to stay on target to meet the expectations of Moore's Law for several more years.
Is it true that the Mac G4 processor is twice as fast as a Pentium III?
In this article we'll tell you which processor is faster and why. Learn how chip designers make use of transistors.
It is true that the G4 is faster than the Pentium III on many tasks. For example, if you run the SETI@home screensaver (which uses lots of floating-point calculations to perform signal processing operations on radio telescope data), a G4 running at 500 megahertz (MHz) will produce a result set in about half the time of a Pentium III running at 700 MHz. This is a remarkable difference in processing capability.
When creating a microprocessor, the designer gets to make millions of decisions. A basic limit in the design is the number of transistors that will fit on a chip, so the designer is trying to make decisions that obtain the best performance from those transistors. The designer may also have to worry about backward compatibility with older instruction sets and looming release dates.
For example, the Intel 8080 processor took something like 80 clock cycles to multiply two 8-bit numbers. It took so long because the number of transistors was severely limited at the time the 8080 was released. Today's processors can often multiply two pairs of 32-bit numbers in a single clock cycle. The difference between then and now is the number of transistors -- a greater number of transistors allows more to happen in a single clock cycle.
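Without a hardware multiplier, early processors multiplied by repeated shifting and adding, handling one bit of the multiplier per pass. A simplified sketch of the idea (the real 8080 spent several clock cycles on each pass, which is how the total climbed toward 80):

```python
def shift_add_multiply(a, b):
    """Multiply two 8-bit numbers one multiplier bit at a time,
    the way processors without a hardware multiplier had to.
    Returns the product and the number of bit-passes taken."""
    product, passes = 0, 0
    for bit in range(8):               # one pass per bit of the multiplier
        if (b >> bit) & 1:
            product += a << bit        # add a shifted copy of the multiplicand
        passes += 1
    return product, passes

print(shift_add_multiply(13, 11))   # (143, 8) -- 8 passes even for small inputs
```

With enough transistors, a modern chip implements all of this as one wide combinational circuit, which is why the same multiplication now finishes in a single cycle.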
If you look at Motorola's documentation, it says that the G4 processor features:
...a high-frequency superscalar PowerPC core, capable of issuing three instructions per clock cycle (two instructions + branch) into seven independent execution units:
• Two integer units
• Double-precision floating-point unit
• Vector unit
• Load/store unit
• System unit
• Branch processing unit
These execution units feed off of a 128-bit internal bus. The feature that gives the G4 most of its speed in SETI@home processing is the double-precision floating-point unit. The G4 can complete one double-precision calculation every clock cycle, while the Pentium III cannot.
The G4 also features an interesting vector processing unit. Applications must be specially coded to take advantage of the vector processor, which allows them to perform certain mathematical operations very quickly. A vector processor executes the same operation on multiple pieces of data at the same time. In the G4, up to eight simultaneous operations can execute in a single clock cycle in the vector unit. This sort of processing power is what makes the G4 so fast when working with math-intensive applications like Photoshop that have been coded to take advantage of vector processing. The Pentium III features a vector processing capability as well, but it is not as powerful.
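The idea behind a vector unit can be illustrated in ordinary code: one instruction is applied to every "lane" of a short, fixed-width block of data. This is only a conceptual sketch -- Python executes both versions as loops, while real vector hardware performs all eight lane operations in a single cycle:

```python
def scalar_add(a, b):
    """Process one element per step -- eight steps for eight numbers."""
    result = []
    for i in range(len(a)):
        result.append(a[i] + b[i])
    return result

def vector_add(a, b):
    """Conceptually one 'vector' operation: the same add is applied
    to every lane of the two 8-wide inputs at once."""
    return [x + y for x, y in zip(a, b)]

a = [1, 2, 3, 4, 5, 6, 7, 8]
b = [10, 20, 30, 40, 50, 60, 70, 80]
print(scalar_add(a, b))   # [11, 22, 33, 44, 55, 66, 77, 88]
print(vector_add(a, b))   # same result, in one conceptual step
```

This is also why applications must be specially coded for the vector unit: the data has to be arranged into these fixed-width blocks before the hardware can process them in parallel.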
Tech Talk: CPU Quiz
When you sit at your computer, browsing the Internet, playing video games and running word processors, it's easy to feel like you're in control. But have you ever thought about what really makes your desktop or laptop run?
What is computing power?
When people speak of supercomputers, they often talk about how powerful the machines are. But just what is computing power, and what makes one type of machine more powerful than another?
What makes a supercomputer so super? Can it leap tall buildings in a single bound or protect the rights of the innocent? The truth is a bit more mundane. Supercomputers can process complex calculations very quickly.
As it turns out, that's the secret behind computing power. It all comes down to how fast a machine can perform an operation. Everything a computer does breaks down into math. Your computer's processor interprets any command you execute as a series of math problems. Faster processors can handle more calculations per second than slower ones, and they're also better at handling really tough calculations.
Within your computer's CPU is an electronic clock. The clock's job is to create a series of electrical pulses at regular intervals. This allows the computer to synchronize all its components and it determines the speed at which the computer can pull data from its memory and perform calculations.
When you talk about how many gigahertz your processor has, you're really talking about clock speed. The number refers to how many electrical pulses your CPU sends out each second. A 3.2 gigahertz processor sends out around 3.2 billion pulses each second. While it's possible to push some processors to speeds faster than their advertised limits -- a process called overclocking -- eventually a clock will hit its limit and will go no faster.
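Clock speed translates directly into the time available for each cycle. A quick calculation for the 3.2-gigahertz example above:

```python
# How long is one clock cycle on a 3.2 GHz processor?
clock_hz = 3.2e9                      # 3.2 billion pulses per second
seconds_per_cycle = 1 / clock_hz

print(seconds_per_cycle)              # 3.125e-10 seconds
print(seconds_per_cycle * 1e9)        # 0.3125 -- nanoseconds per clock tick
```

Every operation the chip performs has to fit into slices of about a third of a nanosecond, which is why even tiny signal delays inside the chip matter.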
As of March 2010, the record for processing power goes to a Cray XT5 computer called Jaguar. The Jaguar supercomputer can process up to 2.3 quadrillion calculations per second [source: National Center for Computational Sciences].
Computer performance can also be measured in floating-point operations per second, or flops. Current desktop computers have processors that can handle billions of floating-point operations per second, or gigaflops. Computers with multiple processors have an advantage over single-processor machines, because each processor core can handle a certain number of calculations per second. Multiple-core processors increase computing power while using less electricity [source: Intel].
Even fast computers can take years to complete certain tasks. Finding two prime factors of a very large number is a difficult task for most computers. First, the computer must determine the factors of the large number. Then, the computer must determine if the factors are prime numbers. For incredibly large numbers, this is a laborious task. The calculations can take a computer many years to complete.
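The brute-force approach described above can be sketched as simple trial division. The numbers below are small illustrative primes chosen for this example; the numbers used in practice are hundreds of digits long, which is what stretches the calculation into years:

```python
def trial_division(n):
    """Factor n by trying every candidate divisor in turn --
    the laborious approach that makes large numbers so slow to factor."""
    factors = []
    d = 2
    while d * d <= n:          # no factor pair can both exceed sqrt(n)
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                  # whatever remains is itself prime
        factors.append(n)
    return factors

print(trial_division(15))          # [3, 5]
print(trial_division(100160063))   # [10007, 10009] -- two 5-digit primes
```

Each extra digit in the input multiplies the number of candidate divisors to try, so the running time grows explosively with the size of the number.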
Future computers may find such a task relatively simple. A working quantum computer of sufficient power could calculate factors in parallel and provide the most likely answer in just a few moments. Quantum computers have their own challenges and wouldn't be suitable for all computing tasks, but they could reshape the way we think of computing power.
What is the difference between a Pentium and a Celeron processor?
When you sort things out and compare the two chips side by side, it turns out that a Celeron and a Pentium 4 chip running at the same speed are different beasts. You should choose a chip based on how you use your computer.
Here are the most important similarities and differences between the Pentium 4 and the Celeron chips coming out today:
• Core - The Celeron chip is based on a Pentium 4 core.
• Cache - Celeron chips have less cache memory than Pentium 4 chips do. A Celeron might have 128 kilobytes of L2 cache, while a Pentium 4 can have four times that. The amount of L2 cache memory can have a big effect on performance.
• Clock speed - Intel manufactures the Pentium 4 chips to run at a higher clock speed than Celeron chips. The fastest Pentium 4 might be 60 percent faster than the fastest Celeron.
• Bus speed - There are differences in the maximum bus speeds that the processors allow. Pentium 4s tend to be about 30 percent faster than Celerons.
When you sort all this out and compare the two chips side by side, it turns out that a Celeron and a Pentium 4 chip running at the same speed are different beasts. The smaller L2 cache size and slower bus speeds can mean serious performance differences depending on what you want to do with your computer. If all you do is check e-mail and browse the Web, the Celeron is fine, and the price difference can save you a lot of money. If you want the fastest machine you can buy, then you need to go with the Pentium 4 to get the highest clock speeds and the fastest system bus.
Why are there limits on CPU speed?
A microprocessor will perform without error when run at or below its maximum rated speed. So why can't chipmakers simply speed them up? There are two things that limit a chip's speed.
When you buy a CPU chip, it has a "maximum" speed rating stamped on the chip's case. For example, the chip might indicate that it is a 3-GHz part. This means that the chip will perform without error when run at or below that speed within the chip's normal temperature parameters.
There are two things that limit a chip's speed:
• Transmission delays on the chip
• Heat build-up on the chip
Transmission delays occur in the wires that connect things together on a chip. The "wires" on a chip are incredibly small aluminum or copper strips etched onto the silicon. A chip is nothing more than a collection of transistors and wires that hook them together, and a transistor is nothing but an on/off switch. When a switch changes its state from on to off or off to on, it has to either charge up or drain the wire that connects the transistor to the next transistor down the line. Imagine that a transistor is currently "on." The wire it is driving is filled with electrons. When the switch changes to "off," it has to drain off those electrons, and that takes time. The bigger the wire, the longer it takes.
As the size of the wires has gotten smaller over the years, the time required to change states has gotten smaller, too. But there is some limit -- charging and draining the wires takes time. That limit imposes a speed limit on the chip.
There is also a minimum amount of time that a transistor takes to flip states. Transistors are chained together in strings, so the transistor delays add up. On a complex chip like the G5, there are likely to be longer chains, and the length of the longest chain limits the maximum speed of the entire chip.
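The relationship between the longest chain and the maximum clock speed can be sketched with a quick calculation. The gate delay and chain length below are illustrative assumptions, not figures for the G5 or any other real chip:

```python
# Illustrative numbers only: how a chain of transistor delays caps clock speed.
gate_delay_ps = 20      # assumed delay for one transistor to flip states (picoseconds)
longest_chain = 25      # assumed number of transistors in the chip's longest chain

chain_delay_ps = gate_delay_ps * longest_chain   # signal must traverse the whole chain
max_clock_ghz = 1000 / chain_delay_ps            # 1000 ps in a nanosecond

print(chain_delay_ps)   # 500 -- picoseconds for one full pass through the chain
print(max_clock_ghz)    # 2.0 -- the clock cannot tick faster than this
```

Each clock period must be long enough for the slowest chain to settle, so either shrinking the gate delay or shortening the longest chain raises the speed ceiling.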
Finally, there is heat. Every time the transistors in a gate change state, they leak a little electricity. This electricity creates heat. As transistor sizes shrink, the amount of wasted current (and therefore heat) has declined, but there is still heat being created. The faster a chip goes, the more heat it generates. Heat build-up puts another limit on speed.
You can try to run your chip at a faster speed -- doing that is called overclocking. On many chips (especially certain models of the Celeron), overclocking works very well. Sometimes you have to cool the chip artificially to overclock it. Other times you cannot overclock at all because you immediately run into transmission delays.