Monday 11 May 2015

Storage Media

Game Storage Media

There are many ways to store and play video games, across a range of different media. I will cover the most important and widely used ones in this section.


  • Cartridges
Cartridges were the first type of console video game storage media. Cartridges are expensive to manufacture, which meant they were eventually superseded by cheaper formats.


  • Flash Memory
Flash memory is a relatively new technology, allowing the user to store games and game saves on a very small flash memory card, such as an SD card or a micro SD card. Examples of consoles which use this are the 3DS and the PSP. A drawback of flash memory cards is their relatively small capacity.

  • DVD
DVDs are no doubt the most common game storage medium of this age. The most a DVD can hold is around 8.5GB (dual-layer), and with the size of modern games, DVDs don't really cut it anymore. This means that most games released on DVD these days have to span multiple discs - GTA 5 on PC is spread over 7 discs! (A rough calculation after this list shows why.)

  • Blu-Ray
So, with DVDs becoming less convenient, Blu-Ray discs are becoming the solution. Blu-Ray discs can store huge amounts of data; on the Xbox One, for example, a single Blu-Ray disc holds 50GB. Blu-Ray readers are also becoming ever cheaper for PCs, so it does seem that there will be a takeover in the near future!

  • Digital Downloading
Digital downloading is the most convenient way to obtain and store games. Users download the video game to their hard drive and play it straight from there. This is heavily reliant on how much bandwidth and hard drive space the user has, as you can't download games without either. Most modern consoles allow this through their marketplace systems, and PCs do too, most conveniently through software such as Steam and Origin.
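To put rough numbers on the disc and download points above, here is a minimal sketch. The game size, disc capacities and connection speed are assumed example figures, not official ones.

```python
import math

# Assumed example figures - not taken from any official specification.
game_size_gb = 59          # install size of a large modern game (assumed)
dvd_capacity_gb = 8.5      # dual-layer DVD (DVD-9)
bluray_capacity_gb = 50    # dual-layer Blu-Ray (BD-50)
download_mbps = 40         # assumed broadband speed in megabits per second

discs_dvd = math.ceil(game_size_gb / dvd_capacity_gb)
discs_bluray = math.ceil(game_size_gb / bluray_capacity_gb)

# 1 GB = 8,000 megabits here (decimal units, for simplicity)
download_hours = (game_size_gb * 8000) / download_mbps / 3600

print(f"DVDs needed:     {discs_dvd}")        # -> 7 discs
print(f"Blu-Rays needed: {discs_bluray}")     # -> 2 discs
print(f"Download time:   {download_hours:.1f} hours at {download_mbps} Mbps")
```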

Piracy is still an ever-present battle in the video game world. Piracy is where a game is illegally copied and played. It has been a problem in the games industry for decades and continues to be - it is arguably worse now than it has ever been. DRM (digital rights management) is put in place on discs, but pirates always find a way to crack it. Some games also have their own anti-piracy code; Alan Wake, for example, gives the player an annoying eyepatch on pirated copies to hinder their gameplay!

Anyhow, that is all from me for this article, please stay tuned for future updates!

Matt :D




Sunday 10 May 2015

Memory, Sound and Display Technologies


This part of the article will be about the different technologies that exist for memory, sound and displays. 


Memory
Memory is one of the most important components in a computer; without memory, no processes would be able to run effectively, and even the CPU has its own small memory (cache). Computers also have separate memory modules (RAM), which store the information currently being processed by the CPU. This saves the CPU from constantly reading from the hard drive, which would make the computer run very slowly. As it is random access memory (RAM), any part of it can be read or written directly, as and when the system requires it.

[Diagram: the speed and price of different forms of memory.]


Types of Memory

There are different types of memory, used for handling different processes inside a computer.

RAM is the most well-known type of memory in a computer. As mentioned above, it is accessed 'randomly': any address can be read or written directly, rather than in sequence. RAM isn't only found in consoles and computers; it is also found in many other electrical devices, such as printers, where it stores things like upcoming print jobs. The information stored in RAM is lost when power is removed, which makes it volatile (temporary) storage.

There are a few main types of RAM:


  • DRAM
DRAM stands for dynamic RAM, and it is the most common type of RAM because it is cheaper to produce. Dynamic RAM is 'dynamic' because it has to be refreshed thousands of times per second: each memory cell is built from a transistor and a capacitor, and the capacitors slowly leak their charge, so they constantly need topping up (refreshing) by the memory controller.

  • SRAM
SRAM is static RAM, which differs from dynamic RAM in that it does not need to be refreshed. Each cell is built from several transistors rather than a capacitor, which makes it much faster, but also more expensive to produce. Because of the cost, SRAM is only used in the parts of a computer where speed really matters (such as CPU caches), and most computers and consoles use a combination of both SRAM and DRAM. As SRAM does not leak its contents, there is no need for it to be constantly refilled.


  • RDRAM
RDRAM is an abbreviation of Rambus Dynamic RAM. RDRAM was one of the fastest types of RAM available to systems. It works similarly to DRAM, but can achieve quicker speeds thanks to its high-speed data bus. RDRAM requires a specialist motherboard to reach its full potential. Several consoles have used RDRAM, the first being the Nintendo 64.

  • SDRAM
Synchronous Dynamic RAM (SDRAM) is another improved type of DRAM. Its operation is synchronised with the system clock, and it can interleave accesses between internal banks of memory, so one bank can be preparing data while another is being read - removing much of the waiting involved in plain DRAM. Because it is tied to the clock, the faster the memory bus speed, the faster the RAM can feed calculations into the CPU and load programs. This also makes it a good match for an overclocked system: if the clock speed is increased, not only will the CPU perform quicker, the RAM will benefit from it too.
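As a rough illustration of how the bus clock and bus width translate into memory speed, here is a minimal sketch. The clock speeds and bus width are assumed example values, and the formula is only the simple theoretical peak, ignoring latency and refresh overhead.

```python
# Theoretical peak bandwidth of a simple single-data-rate SDRAM bus.
# The figures below are assumed example values, not any specific console's.

def peak_bandwidth_mb_s(bus_clock_mhz: float, bus_width_bits: int) -> float:
    """One transfer per clock cycle, bus_width_bits bits moved each transfer."""
    transfers_per_second = bus_clock_mhz * 1_000_000
    bytes_per_transfer = bus_width_bits / 8
    return transfers_per_second * bytes_per_transfer / 1_000_000  # MB/s

print(peak_bandwidth_mb_s(100, 64))   # 100 MHz, 64-bit bus    -> 800.0 MB/s
print(peak_bandwidth_mb_s(133, 64))   # overclocked to 133 MHz -> 1064.0 MB/s
```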



Display


Most displays these days are LCDs (Liquid Crystal Displays). LCDs work by using liquid crystals to turn pixels on and off: light from a backlight is shone through liquid crystal filters, which control the light and colour levels, and it ends up in the form of differently coloured pixels. Each pixel is made up of 3 sub-pixels - red, green and blue - and with 256 shades of each, they can combine to display around 16.7 million colours.
The number of pixels across multiplied by the number of pixels down gives the resolution (e.g. 1920x1080 = 2,073,600 pixels).
All LCD displays have a resolution; for example, the New 3DS has an 800x240 top screen, and the iPad Air 2 has a resolution of 2048x1536. Generally, the higher the resolution, the clearer (and more expensive) the display.
4K is a 'new' technology that has only become widely available recently; it refers to the resolution 3840x2160. This is obviously a very crisp resolution, but for a console or PC to run games natively at it, very beefy hardware is needed. On PC, players who want high graphical detail in their games at 4K will most likely need an SLI or CrossFireX configuration (multiple GPUs).
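To make the pixel and colour arithmetic above concrete, here is a minimal sketch using the resolutions mentioned in the text:

```python
# Pixel counts and colour depth for the displays mentioned above.

def pixel_count(width: int, height: int) -> int:
    return width * height

displays = {
    "Full HD (1080p)": (1920, 1080),
    "New 3DS top screen": (800, 240),
    "iPad Air 2": (2048, 1536),
    "4K UHD": (3840, 2160),
}

for name, (w, h) in displays.items():
    print(f"{name}: {pixel_count(w, h):,} pixels")

# Three 8-bit sub-pixels (red, green and blue) give 256 shades each:
print(f"Colours per pixel: {256 ** 3:,}")  # 16,777,216 - the '16.7 million'
```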



Sound

Before rich and realistic sound existed for computers and consoles, they could only beep. These beeps could be put together at different pitches to make a very basic tune. Although irritating, there was no alternative at the time. Computers could not change the volume of the beeps, or make any other sounds.
These beeps started out as a warning signal (much like the beeps you hear when a PC boots), but ended up being manipulated into basic, non-realistic and quite frankly irritating tunes.

With the evolution of technology, today's games have rich, 3D, highly realistic and cinematic sound, capable of being multi-layered. You can play pretty much every modern game in full surround sound, if you own even the cheapest 5.1 or 7.1 speaker setup. Capturing and recording sound is also made easy and high quality with sound cards. Sound cards have four main, notable components:


  • Analog-to-digital converter
  • Digital-to-analog converter
  • An ISA (Industry Standard Architecture) or PCI (Peripheral Component Interconnect) interface to connect the sound card to the motherboard.
  • Input and output connections for microphones and speakers.
Most motherboards these days actually have high-quality on-board sound chips, capable of surround sound - more often than not even full 7.1 surround.
PCI interfaces are universal, meaning they are also used for other expansion cards, such as graphics cards or USB expansion cards.

The way the ADC (analog-to-digital) and DAC (digital-to-analog) converters work is that a microphone input takes the analog sound of whatever you are recording and converts it to a digital file, and when you play a digital sound file from a computer, it is converted back to analog sound. An analog sound is recorded by taking regular measurements (samples) of the analog wave; the number of samples taken per second is called the sampling rate and is measured in kilohertz (kHz). The better the sound card, the more samples can be taken, and the higher quality the sound. As an alternative to sound cards, consoles can have an MCP (Media Communications Processor), which works the same as an integrated sound chip.
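As a minimal sketch of what sampling means, the snippet below 'samples' a 440 Hz sine wave (an assumed example tone) at two different rates - the higher the rate, the more measurements are stored and the more closely they follow the original wave:

```python
import math

def sample_tone(freq_hz: float, sample_rate_hz: int, duration_s: float):
    """Return a list of amplitude measurements of a sine wave."""
    n_samples = int(sample_rate_hz * duration_s)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate_hz)
            for i in range(n_samples)]

low = sample_tone(440, 8_000, 0.01)    # 8 kHz: 80 samples for 10 ms
high = sample_tone(440, 44_100, 0.01)  # 44.1 kHz (CD quality): 441 samples
print(len(low), len(high))
```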

It is easy to confuse 3D sound and surround sound. 3D (positional) sound is produced within the game, which is what allows the player to pinpoint where things are happening, and it moves with the player accordingly. Surround sound refers to the physical speaker setup, which is common in home theatre systems.








So, that is my section on current Memory, Sound and Display technologies - please stay tuned for more!

Matt :3













Processing Units in Games Platforms


This part of the article is about the different processing units that are found within computer gaming hardware.


CPU (Central Processing Unit)

Sometimes referred to simply as the central processor, but more commonly just called the processor, the CPU is the powerhouse of the computer, where most calculations take place. In terms of computing power, the CPU is the most important element of a computer system.
As the CPU has a large workload, it gets very hot. The way to combat the heat is with a heatsink. A heatsink always covers the CPU, often paired with a fan to draw the heat away. Between the CPU and the heatsink is a layer of thermal paste, which improves the contact between the two and allows the heat to be dispersed more effectively. If the CPU gets too hot, it will either be damaged or burn out completely.

As with most hardware, there are always better- and worse-performing variants. A CPU's speed is (nowadays) measured in GHz (gigahertz). 1GHz is 1 billion cycles per second, so a processor with a clock rating of 1.75GHz (e.g. the Xbox One's CPU) completes 1.75 billion cycles per second. High-end PC CPUs can have clock speeds of 4-5GHz. The clock speed of an 'unlocked' PC processor can be increased via a process called overclocking. Overclocking is a risky procedure, only to be attempted if the chip can be sufficiently cooled, as it is easy to fry the processor in the process - and overclocking a chip will reduce its lifespan even when it works!

Modern CPUs can have multiple cores; the Xbox One's, for example, is octa-core (8 cores). This means the chip itself has multiple processing cores, allowing the workload to be shared and making for faster processing. Cores can be allocated to specific tasks, so an octa-core processor like the Xbox One's might give some cores to the various game engines (physics simulation, collision detection etc.), others to sound, and at least one to running the console's other programs - which is what lets the Xbox One multitask and act as more of a media device. With 8 cores at 1.75GHz each, the Xbox One's CPU has a combined clock rate of 14GHz - although cores don't literally add together like that, it gives a sense of the total processing power on the chip.
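Here is a minimal sketch of the arithmetic above (the clock speed and core count come from the text; the 'combined' figure is only a rough indicator, since real workloads never scale perfectly across cores):

```python
# Cycles per second, and the rough 'combined' figure discussed above.

clock_ghz = 1.75   # Xbox One CPU clock, as quoted in the text
cores = 8

cycles_per_second_per_core = clock_ghz * 1_000_000_000
print(f"{cycles_per_second_per_core:,.0f} cycles per second per core")

# Naive combined figure - a rough indicator, not a guaranteed speed-up.
print(f"Combined: {clock_ghz * cores} GHz across {cores} cores")  # 14.0 GHz
```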


CPUs have a 'CPU cache'. This cache is a small amount of memory that can be accessed much more quickly than any other RAM in the system. The cache temporarily holds the most-used data and instructions, which allows the system to reduce loading times for some operations. It is quicker and more effective for the computer to read data from the cache than to fetch or generate it again, which is why a copy is kept there.
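Hardware caches are managed automatically by the CPU, but the same 'keep a copy of expensive results close at hand' idea appears in software too. Below is a minimal software analogy using Python's functools.lru_cache - it is not how a hardware cache is implemented, just an illustration of why re-reading a stored result beats recomputing it:

```python
from functools import lru_cache

@lru_cache(maxsize=128)          # keep up to 128 recent results
def expensive(n: int) -> int:
    # Stand-in for work the CPU would rather not redo (purely illustrative).
    total = 0
    for i in range(1_000_000):
        total += (i * n) % 7
    return total

expensive(3)   # first call: the work is actually done
expensive(3)   # second call: the stored result is returned immediately
```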



The address/data bus is a group of wires that allows data to be transferred between different components in a system. It is formed of two parts - the address bus and the data bus. The data bus transfers the actual data itself, whereas the address bus carries the information about where it should go. An easy way to remember which is which is that the address bus holds the address of where the data is going (like a postal address) and the data bus carries the data. The width (size) of a bus determines how much data can be transmitted at once; for example, a 16-bit bus can transmit 16 bits of data per transfer, and a 32-bit bus can transmit 32 bits. Buses have a clock speed similar to a CPU's, but it is measured in megahertz (MHz) rather than GHz. The faster this speed, the faster applications are able to run.
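As a minimal sketch of what bus width means in practice, the snippet below counts how many bus cycles it takes to move the same amount of data over buses of different widths (the 64-byte payload is an assumed example, roughly one cache line):

```python
# How many bus cycles it takes to move the same data over buses of
# different widths - illustrative only, ignoring addressing overhead.

def cycles_to_transfer(data_bytes: int, bus_width_bits: int) -> int:
    bytes_per_cycle = bus_width_bits // 8
    # Round up: a partial word still needs a full cycle.
    return -(-data_bytes // bytes_per_cycle)

payload = 64  # bytes (assumed example, roughly one cache line)
print(cycles_to_transfer(payload, 16))  # 32 cycles on a 16-bit bus
print(cycles_to_transfer(payload, 32))  # 16 cycles on a 32-bit bus
```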


GPU (Graphics Processing Unit)



The GPU of a console is a separate processing chip that is primarily used to manage and boost the performance of video and graphics. GPU features include:

  • 2-D or 3-D graphics
  • Digital output to flat panel display monitors
  • Texture mapping
  • Application support for high-intensity graphics software such as AutoCAD
  • Rendering polygons
  • Support for YUV color space
  • Hardware overlays
  • MPEG decoding
In simpler terms, these mathematically intensive features are designed to lessen the work of the CPU and produce faster video and graphics. A GPU is not only used in consoles; it is also used in PCs (via a graphics card or motherboard-integrated graphics), mobile phones, display adapters and workstations. Freeing up the CPU allows it to be used for other processes.

As I very briefly mentioned, there are multiple types of GPU available.

  • Dedicated graphics card
For PCs, to achieve the best graphics processing you need a dedicated video card. These are an all-in-one package of a heatsink, a PCB (like the graphics card's own motherboard), the GPU chip and fans. Multiple graphics cards can be used at once; this is an expensive and power-hungry solution, but it allows users to run games at very high settings and very high resolutions. For NVIDIA users, the way to do this is their 'SLI' technology, and AMD users use CrossFireX. Both of these technologies are essentially the same.


  • Integrated/on-board graphics processor 

Most motherboards have an integrated graphics chip included on them, which can be used without the need for an expansion graphics card. As they share the system's resources (such as main memory), they can easily slow the whole computer down while in use, making them very undesirable for gaming. Most laptops run from integrated graphics, unless they are gaming laptops with dedicated (sometimes swappable) cards.


  • Custom-designed graphics chips
Almost all consoles have a custom-designed graphics chip. In the Xbox One's case, the GPU sits on the same chip as the CPU. Although it shares a chip, it still does everything a GPU should do. These GPUs have to be custom-made to deal very specifically with how the Xbox itself handles programs, making them useless in any other system.

There are two main GPU manufacturers on the market - Nvidia and ATI/AMD.

Similarly to CPUs, GPUs come in different speeds, but they are usually measured in MHz. The Xbox One's GPU runs at 853MHz; compare that with the Xbox 360's GPU, which ran at 500MHz. Graphics chips also have an amount of dedicated video memory (VRAM), which has a huge effect on the performance of a game, especially if it is very highly detailed and textured. This is the same on PCs, but as PCs have a wider range of resolutions and frame rates, the VRAM affects those too. Graphics processors work to display 3D images on a 2D screen, so the more detailed a 3D model (more polygons, higher-resolution textures etc.), the harder the GPU has to work to display it. For a game to be well optimised, its assets have to be well optimised for the hardware they run on. This can be achieved with techniques such as NURBS modelling, a way of modelling with curves that uses less GPU memory.
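As a very rough sketch of why resolution and texture detail eat into VRAM, the snippet below estimates the memory needed for uncompressed 32-bit frame buffers and a texture. Real engines compress textures and keep many more buffers around, so these are simplified, assumed figures:

```python
# Uncompressed memory footprint of a frame buffer or texture:
# width x height pixels, 4 bytes per pixel (8 bits each for R, G, B, alpha).

def raw_size_mb(width: int, height: int, bytes_per_pixel: int = 4) -> float:
    return width * height * bytes_per_pixel / (1024 * 1024)

print(f"1080p frame buffer: {raw_size_mb(1920, 1080):.1f} MB")  # ~7.9 MB
print(f"4K frame buffer:    {raw_size_mb(3840, 2160):.1f} MB")  # ~31.6 MB
print(f"4096x4096 texture:  {raw_size_mb(4096, 4096):.1f} MB")  # 64.0 MB
```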


Anyway, that is my section on processing units, please do stay tuned for more!

Matt :D










Friday 8 May 2015

HCI (Human Computer Interaction)



HCI (Human Computer Interaction) is the term used here to describe a peripheral used by a human to access the controls of a game. This can be in the form of a controller, remote or other peripheral.

I will refer to a couple of terms throughout the article, so I will cover what they mean here, so that you understand what I am talking about.


  • Ergonomics
The ergonomics of a controller is essentially how comfortable it is to hold - how well it fits into a pair of hands. A controller with good ergonomic design will feel intuitive to the player; they will feel their way to what they need to press. The buttons need to be easy to press in unison and in combinations, and the controller needs to be comfortable not just at first, but for extended periods of play!

  • Button Configuration
The button configuration of a controller is simply the location of the buttons on it. The buttons need to be able to be pressed quickly and easily. In most games these days you can assign buttons to different actions and functions, so a controller needs to cater for all of this (see the small sketch after this list).
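As a minimal sketch of what 'assigning buttons to actions' looks like on the software side, here is an illustrative mapping - the button and action names are made up for the example, not taken from any real game or API:

```python
# A hypothetical, remappable button configuration - names are illustrative.
default_bindings = {
    "A": "jump",
    "B": "crouch",
    "X": "reload",
    "RT": "fire",
}

def rebind(bindings: dict, button: str, action: str) -> dict:
    """Return a new configuration with one button reassigned."""
    updated = dict(bindings)
    updated[button] = action
    return updated

custom = rebind(default_bindings, "B", "melee")
print(custom["B"])  # melee - the player has reassigned the crouch button
```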

So, I will now go into the history of the HCI that has been introduced alongside video game consoles.



1970s

In the 1970s, the Atari 2600 was released. The Atari 2600 had multiple input devices, such as the joystick, the Trak-Ball, paddles and driving controllers. The paddles and driving controllers were essentially the same aesthetically, but each allowed for a different control mechanism designed for different games. The Trak-Ball was a controller with a spinning ball in the center.


1980s

In the 1980s, the Atari 2600's successor was released - the Atari 5200.
The 5200 had a joystick controller again, and it also had a revision of the Trak-Ball. The controller was revolutionary at the time because it allowed the user to pause the game they were playing - a rare novelty then, and a large selling point!

The Nintendo NES was also released in the 1980s, and it saw possibly the biggest revolution in controllers - one still seen in pretty much all controllers today. The NES controller featured a directional button pad (D-pad) - up, down, left and right.


Another feature of the NES controller was its button layout: 'start' and 'select' buttons, plus the 'A' and 'B' buttons. These also changed the face of controllers for years to come, as you will see later on.
The Sega Master System was also released in the 1980s. The Master System featured the same very squared-off controller shape as the NES, and a very similar button layout.

In the later 1980s, Sega released their Mega Drive/Genesis. This was the first console in history to feature an ergonomic controller - it fit the hands of the player and was comfortable to hold. Of course, this controller also had a very similar button layout to the NES and Master System, with the D-pad on the left and the buttons on the right.

In my previous posts I rambled quite a lot about the success of the Game Boy, and of course I will continue here!
The Game Boy was released in 1989, and it was the best handheld console of its time - hands down. It was made by Nintendo and featured the same button layout as the NES. If you look at it, it is exactly the same, but the buttons are angled for more comfort. This button layout would be carried on by Nintendo for consoles to come.



1990s

In the mid-1990s, one of the most recognizable and successful consoles of all time was released - the PlayStation. The PlayStation had an even more comfortable and ergonomic controller than anything seen before; it was nice to hold and had very good button placement. As you can see, it had the same D-pad and 'start'/'select' buttons as earlier controllers. It also had the same buttons on the right, plus two additional ones.
The Playstation had shoulder buttons, which was a new feature on controllers. Could this feature take off?

The 1990s also saw the release of the Nintendo N64, which had a very odd-looking controller indeed. The N64's controller was oddly comfortable, and again it featured all of the familiar buttons from other controllers. Alongside these, it also featured more buttons on the right and an analog stick in the middle. The N64 was also the first to have vibration functionality, achieved through an add-on pack.

After the N64, Sony released the PlayStation DualShock controller, which added two small thumb-sticks/analog sticks. Other than these, it remained the same as the original PlayStation controller.

In the late 1990s, Sega released their Dreamcast console. This featured a controller with the first ever screen on it, although it was very hard to see and use, as it was a very basic liquid crystal display.

2000s Onwards!

In the very early 2000s, Microsoft took a shot at the market with their Xbox. Their controller had thumb-sticks too, but in a much more ergonomic position. Controllers from this point on became much more comfortable to hold, and manufacturers began to nail button placement.

Nintendo also released a console in the early 2000s - the GameCube. The GameCube controller had thumb-sticks in similar places to the Xbox controller, and featured a very similar button layout too.

In the early 2000s, Sony released their PlayStation 2. Although there was nothing particularly revolutionary about its controller, they also released the EyeToy. The EyeToy was the first motion-controlled peripheral for a games console - a little camera that you would stand in front of and use to interact with on-screen graphics.

Sony and Nintendo both released handheld consoles in the early 2000s - the PSP and the DS. The Nintendo DS featured touch-screen input, which was a big feature at the time, whereas the PSP had a high-resolution (though not touch) screen. Both of these consoles were revolutionary in their own different ways.

2005 saw the release of arguably the best controller of all time. In my opinion, the Xbox 360's controller was perfect: it fit hands exceptionally well and was incredibly comfortable to hold for long periods of time.
The Xbox 360 also had wireless controllers, which were very easy to connect and use - a new feature at the time.

Sony's PlayStation 3 controller looked pretty much exactly the same as the PlayStation and PlayStation 2 controllers, although the PS3's controller had wireless functionality and built-in motion sensing (used with certain games), called SIXAXIS.

Nintendo released their Wii in 2006, which had a very state-of-the-art controller - a motion controller that was fully wireless at the same time. The Wii also had optional plug-in controllers, such as a more traditional game pad, if you wanted to use one.


Nintendo released their third revision of the DS in 2008: the DSi. The DSi featured a front-facing camera and a revised shape, and this camera made it a big game-changer.

Alongside the release of the game Mario Kart Wii, a Wii Remote accessory was released - the Wii Wheel. The Wii Wheel was possibly the dumbest gimmick that pretty much every Wii owner bought into: it was relatively expensive for a plastic shell, and it didn't really allow the player to do anything different. Even though it was gimmicky, it worked, and it sold very well for Nintendo. Hats off to them.

In 2009, Nintendo released yet another Wii accessory. It is apparent by now that it is Nintendo's style to release something and then polish it over the years to keep the sales coming. In 2009 they released the Wii MotionPlus, a little add-on for the controller that was essentially just a gyroscope. The gyroscope allowed players to add things such as topspin and backspin in a game of virtual tennis, and so on.

In 2009, Sony also released possibly the most forgotten-about revision of the PSP of all time. The PSP Go sold badly, but it did offer very comfortable controls and a good button layout.

By 2010, Microsoft had taken a shot at the motion-control market with the Kinect. The Kinect did sell well, as it was often bundled with consoles, but people did not really get on with it. The Kinect was reported to just not work well on the 360, and even when it did, it wasn't really what people wanted. Maybe this is because the majority of people who owned an Xbox 360 were serious gamers, and motion controls worked better on a family console such as the Nintendo Wii?

In 2011, Nintendo released a whole new revision of the DS: the 3DS. The 3DS offered a semi-three-dimensional experience to the user, which was a huge unique selling point, and it sold very well because of it. The 3DS was also the first DS to have a thumb-stick. Nintendo also released an add-on case for the 3DS which gave the user a second thumb-stick.


This year (2015) saw the release of Nintendo's newest revision of the DS: the New 3DS. The New 3DS has a few refinements made to its body, but its new selling point is 'stable 3D' - a much better 3D display where the player doesn't have to keep their head in a specific position to see the 3D effect.


In 2012, Nintendo released a new Wii console - the Wii U. The Wii U's controller has a large screen on it, onto which the player can mirror the display. The rest of the design actually resembles the 3DS (with its circle pad attachment) in my opinion, and even if not, it definitely stays true to the usual Nintendo button layouts.


In 2013, Microsoft released the Xbox One. The Xbox One's controller is very similar to the Xbox 360's, with pretty much the same button layout and features. I own an Xbox One and an Xbox 360, and I can say that the Xbox One's controller has its pros and cons. It has very nice thumb-sticks, with ridges around their perimeter for more grip; it also has rumble triggers, which I believe give much better haptic feedback to the player, and a D-pad that actually works (compared to that of the 360 controller). The things that annoy me about this controller are that the guide button is in an awkward place to press, and the select and start buttons have been renamed, which is confusing.
Also in 2013, Sony released the PS4. The PlayStation 4's controller has a lot of new and innovative features, such as the touch pad and the light bar (used for motion tracking). A bad thing about the controller is how 'flimsy' it can be - the rubber covers on its thumb-sticks come off really easily.






With many of these controllers, the next revision isn't too different. The company will conduct UCD (User Centered Design) research, where they ask a group of people how they felt about the previous controller, what functionality needs building on, and whether there are any new features they would like. By conducting UCD, companies can produce a very polished and improved controller for their next console. The success of a console relies heavily on the ergonomics of its controller. A UCD survey would need to take many different factors into consideration, such as:


  • What does the gamer have to do?
  • Can the controller cater for the needs of different genres of game?
  • Who will be the main audience for the controller?
  • What do gamers need from the controller?
A successful UCD test will be carried out on a range of different people, with a range of different interests, play styles and levels of video gaming experience.

Portability is a big factor to consider with controllers too. If a controller is wireless, it has to connect to a different console with ease - it is easy to get impatient if you're waiting ages to play a game because your controller won't connect. A controller also has to be easy to disconnect and take to another place. The portability of a handheld console is important too, and the battery life of a portable console also has to be taken into consideration.


So, that is my section on HCI, please stay tuned for the rest of this article :)

Matt

Intro - Part 2


Unit 20 - Part 2: Current Hardware Technologies

Here is the second part of my Unit 20 assignment. In this section, I will investigate and evaluate:

  • Human Computer Interface developments for video games platforms.
  • The processing units (GPUs and CPUs) for video games platforms.
  • Memory, display and sound technologies for video game platforms.
  • Video game storage mediums.
So, I shall begin with my section on HCI - please stay tuned :D

Matt.