What's standing at Intel's platform?

Slowly but surely, the standard tasks of the developer’s daily grind are being absorbed and packaged up by a growing number of vendors.

For example, systems management tools vendors have already subsumed much of the management coding that would in the past have been the developer’s lot, and now Intel is casting its beady eye on the potential at the other end of the spectrum.

The company has been integrating large amounts of PC real estate into the processor itself, or the associated chipset, for some time; the graphics controller is just one obvious example. But now it is looking at what constitutes a ‘server’ and starting to identify that functionality as a target for integration into its own architectures.

It has already integrated virtualisation into the processor with the new VT technology, and has recently added power management as well. The next target, due to be implemented in the Dempsey dual-core Xeon DP processor, is the Active Management Controller, a module capable of monitoring performance and similar factors that collectively sum up the ‘health’ of the processor.
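As a rough illustration of what on-chip virtualisation support looks like from the software side, the short Python sketch below checks whether a Linux host advertises Intel VT (the ‘vmx’ CPU flag) by reading /proc/cpuinfo. It is not part of any Intel or vendor tooling described here; the path and flag name are simply the conventional ones on Linux.

# Minimal sketch (assumes a Linux host): detect Intel VT support by looking
# for the "vmx" flag that the processor reports via /proc/cpuinfo.

def has_intel_vt(cpuinfo_path="/proc/cpuinfo"):
    """Return True if any CPU in /proc/cpuinfo advertises the 'vmx' flag."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags") and ":" in line:
                    if "vmx" in line.split(":", 1)[1].split():
                        return True
    except OSError:
        pass
    return False

if __name__ == "__main__":
    print("Intel VT (vmx) reported by this host:", has_intel_vt())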

According to Kirk Skaugen, VP of Intel’s Server Platforms Group, the company is working in close collaboration with the mainstream systems management vendors, such as IBM, HP, BMC and CA, as well as Symantec, LANDesk and Novell, so that their tools can all interoperate with the on-chip functionality.

Also expected to appear soon is I/O Acceleration Technology (I/OAT), designed to significantly boost TCP/IP performance, and Skaugen indicated that other targets for integration are under development and scrutiny. Indeed, they will form integral parts of what he called a Formal Usage Model for what the company now calls its server platform, which will incorporate functionality such as dynamic provisioning, services and node configuration.
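For readers curious what I/OAT looks like from the operating system’s side, the following Python sketch checks whether the Intel I/OAT DMA engine driver (ioatdma on Linux) is loaded and lists any DMA channels exposed in sysfs. The driver name and paths are assumptions about a typical Linux installation, not something drawn from Intel’s Formal Usage Model.

# Minimal sketch (assumes a typical Linux host): look for signs that the
# Intel I/OAT DMA engine is present, via /proc/modules and /sys/class/dma.

import os

def ioat_driver_loaded(modules_path="/proc/modules"):
    """Return True if the ioatdma kernel module appears in /proc/modules."""
    try:
        with open(modules_path) as f:
            return any(line.split()[0] == "ioatdma" for line in f)
    except OSError:
        return False

def dma_channels(sys_dma_path="/sys/class/dma"):
    """List DMA channel entries exposed under /sys/class/dma, if any."""
    try:
        return sorted(os.listdir(sys_dma_path))
    except OSError:
        return []

if __name__ == "__main__":
    print("ioatdma driver loaded:", ioat_driver_loaded())
    print("DMA channels visible:", dma_channels())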

All of this follows a pattern set out by Intel’s law-meister, Gordon Moore, many years ago. Speaking at the 1979 International Solid State Circuits Conference in Philadelphia, he observed that as device complexity increases the number and diversity of functions possible on a chip also increase. The danger with this, of course, is that it is all too easy to end up with an all-singing, all-dancing device that is so complex it does not fit the requirements of any server vendor.

But targeting increasing amounts of low-level, commonly used functionality has the potential not only to increase the value and margin of each processor, but also to increase users’ dependence on the device. A ‘Formal Usage Model’ will inevitably be a two-edged sword for developers, especially as it grows, for they will have to be ready to grow with it. If it does succeed, it will have the effect of creating a new ‘baseline’ of services and functionality to which developers will have to work. This could have the distinct advantage of effectively standardising a growing range of common functions that will no longer need to be in the developer’s standard repertoire of coding skills. In turn, developers will be freed to start applying their talents at the next level of abstraction in applications and systems development.

But if Intel fails to make this work, either by picking the wrong functionality or by integrating too much too soon, developers may well find those old skills are still needed after all.

