Intel Developer Forum Spring 2004 - Wrapup
by Derek Wilson on February 23, 2004 8:44 PM EST - Posted in Trade Shows
The final day's keynote is always a thought-provoking experience. This is the time during the forum when Intel looks deep into its R&D labs and gives us a little glimpse of what the future holds. We heard from Sean Maloney, VP and GM of Intel's Communications Group, and Pat Gelsinger, Intel's CTO, on all the latest and greatest ideas Intel is focusing on.
In addition to the final day's keynote, this wrap-up will take a look at the floor of the Technology Showcase. We will also take a more in-depth look at what exactly is going on with PCI Express, ATI, and NVIDIA.
We are still reading through documents and doing research on Intel's x86-64 extensions, though there isn't any more news we can bring you at the moment. As with other processor technologies, when x86-64 is finally enabled (when Nocona launches), we will have an in-depth analysis of the architectural enhancements.
Broadband Wireless Technology
Back in the days of the original Quake, average users first realized that their computers just weren't fast enough. In response, processors, graphics cards, and whole systems were pushed to run games well. Even now, games are the applications that tend to push users' systems to their limits. Sean Maloney pinpoints broadband as the next area that will push computers to their limits. As broadband wireless becomes a reality, portable wide pipes will push PDAs and other devices to actually use the data to which they have access.
In looking at future technology to push portable devices, Intel is targeting key areas that are current bottlenecks in portable systems. The first announcement of the keynote was a 90nm NOR flash memory device intended to help speed up the normally slow memory used in these devices. Sean then ran a demo of a portable visualization technology (codenamed Carbonado) that can play full-motion video and push enough polygons per second to run 3D games at smooth frame rates. At this rate, we may have to expand our graphics coverage to include cell phone GPUs.
Unfortunately, Sean didn't want to talk much about Intel's radio enhancements (indicating that the next IDF might offer a little more information in this area). He did say that Intel is exploring MEMS for use in radios.
The success or failure of products using these technologies depends heavily on the availability of wireless broadband and pervasive networking. Intel isn't leaving those technologies alone either. We saw a demo of Xilinx's implementation of the recently finalized AS (Advanced Switching) interconnect standard. In addition, Intel is working on 10Gbps and 1Gbps network switch silicon (90nm, of course), 4Gbps optical transceivers (due out 2H '04), and even a 10 gigabit PCI-X Ethernet card. Sean was also very happy with the current push toward 802.16 and WiMAX. One of the most interesting numbers Intel threw out is that they expect 802.16e (portable WiMAX) to pop up in 2006.
Comments
TrogdorJW - Tuesday, February 24, 2004 - link
Ugh... IPS was supposed to be IPC. IPS has been proposed as an alternative to MHz as a processor speed measurement (Instructions Per Second = IPC * MHz), but figuring out the *average* number of instructions per clock is likely to bring up a whole new set of problems.
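To make that relationship concrete, here is a minimal sketch of the arithmetic behind such a metric; the IPC and clock figures are illustrative placeholders, not measurements of any real chip.

```python
# Sketch of the IPS = IPC * clock relationship described above.
# The IPC and clock values below are made-up placeholders for illustration.

def instructions_per_second(avg_ipc: float, clock_hz: float) -> float:
    """Instructions per second = average instructions per clock * clock rate."""
    return avg_ipc * clock_hz

# A lower-IPC, high-clock design and a higher-IPC, low-clock design can land
# in the same place -- which is why pinning down the *average* IPC is the hard part.
print(instructions_per_second(avg_ipc=1.0, clock_hz=3.2e9))  # 3.2 billion IPS
print(instructions_per_second(avg_ipc=2.0, clock_hz=1.6e9))  # also 3.2 billion IPS
```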
TrogdorJW - Tuesday, February 24, 2004 - link
The AMD people will probably love this quote: "We still need to answer the question of how we are going to get from here to there. As surprising as it may seem, Intel's answer isn't to push for ever increasing frequencies. With some nifty charts and graphs, Pat showed us that we wouldn't be able to rely on increases in clock frequency giving us the same increases in performance as we have had in the past. The graphs showed the power density of Intel processors approaching that of the sun if it remains on its current trend, as well as a graph showing that the faster a processor, the more cycles it wastes waiting for data from memory (since memory latency hasn't decreased at the same rate as clock speed has increased). Also, as chips are fabbed with smaller and smaller processes, increasing clock speeds will lead to problems with moving data around a chip in less than one clock cycle (because of interconnect RC delays)."
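As a back-of-the-envelope illustration of the memory-latency point in that quote, the sketch below assumes a flat 100 ns round-trip to memory (an assumed round number, not a measured figure) and shows how the cycles lost per access grow with clock speed.

```python
# If memory latency stays roughly flat while clock speed climbs, every trip to
# memory costs proportionally more cycles. The 100 ns latency is an assumption.

MEMORY_LATENCY_NS = 100.0

for clock_ghz in (1.0, 2.0, 4.0, 8.0):
    cycle_time_ns = 1.0 / clock_ghz                  # length of one cycle at this clock
    stalled_cycles = MEMORY_LATENCY_NS / cycle_time_ns
    print(f"{clock_ghz:.0f} GHz core: ~{stalled_cycles:.0f} cycles stalled per memory access")
```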
Of course, this is nothing new. Intel has been pursuing clock speed with P4 and parallelism with P-M and Itanium. In an ideal world, you would have Pentium M/Athlon IPS with P4 clock speeds. Anyway, it looks like programmers (WOOHOO - THAT'S ME!) are going to become more important than ever in the future processor wars. Writing software to properly take advantage of multiple threads is still an enormously difficult task.
Then again, if game developers, for example, would give up on the "pissing contest" of benchmarks and code their games to just run at a constant 100 FPS max, it might be less of an issue. If CPUs get fast enough that they can run well over 100 fps on games, then they could stop being "Real Time Priority" processes.
It really irks me that most games suck up 100% of the processor power. If I could get by with 30% processor usage and let the rest be multi-tasked out to other threads while maintaining a good frame rate, why should the game not do so? This is especially annoying in games that aren't real-time, like turn-based strategy games.
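For what it's worth, here is a minimal sketch of the kind of frame cap being asked for: sleep away the leftover frame budget instead of spinning the CPU. The 100 FPS target comes from the comment above, and update_and_render() is just a placeholder for real game work.

```python
import time

TARGET_FPS = 100
FRAME_BUDGET = 1.0 / TARGET_FPS  # 10 ms per frame at 100 FPS


def update_and_render() -> None:
    """Placeholder for the game's simulation and rendering work."""
    pass


def run_capped_loop(num_frames: int = 200) -> None:
    """Toy game loop that yields leftover frame time to the OS instead of busy-waiting."""
    for _ in range(num_frames):
        frame_start = time.perf_counter()
        update_and_render()
        elapsed = time.perf_counter() - frame_start
        leftover = FRAME_BUDGET - elapsed
        if leftover > 0:
            time.sleep(leftover)  # hand the unused time back to other processes/threads


if __name__ == "__main__":
    run_capped_loop()
```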
TrogdorJW - Tuesday, February 24, 2004 - link
"As for an example of synthesis, we were shown a demo of realtime raytracing. Visualization being the infinitely parallelizable problem that it is, this demo was a software renderer running on a cluster of 23 dual 2.2GHz Xeon processors. The world will be a beautiful place when we can pack this kind of power into a GPU and call it a day."Heheheh.... I like that. It's a real-time raytracing demo! Woohoo! I've heard people talk about raytracing being a future addition to graphics cards. If you assume that the GPU with specialized hardware could do raytracing ten times faster than the software on the Xeons, we'll still need 5 GHz graphics chips to pull it off. Or two chips running at 2.5 GHz? Still, the thought of being able to play a game with Toy Story quality graphics is pretty cool. Can't wait for 2010!
Shuxclams - Tuesday, February 24, 2004 - link
Oops, no comment before. Am I seeing things or do I see a southbridge, northbridge and memory controller?

SHUX
Shuxclams - Tuesday, February 24, 2004 - link
HammerFan - Tuesday, February 24, 2004 - link
Intel probably won't use an onboard mem controller for a long time... I've heard that their first experiences with them weren't good. Also, the northbridges are way too big to not have a mem controller on board.

*new topic*
That BTX case looks wacky to me...why such a big heatsink for the CPU?
*new topic*
I have the same question Cygni had: Are there any CTs in these pictures, or are there none out-and-about yet?
Ecmaster76 - Tuesday, February 24, 2004 - link
I counted eight DIMMs on the first board and either six or eight on the second one. Dual core memory controller? If so, it would help Intel keep the Xeon from being spanked by Opteron as they scale.

capodeloscapos - Tuesday, February 24, 2004 - link
Quote: " It is possible that future games (and possibly games ported by lazy console developers) may want to use the CPU and main memory a great deal and therefore benefit from PCI Express"cough!, Halo, Cough!, Colin McRae 3, cough!...
:)
Cygni - Tuesday, February 24, 2004 - link
I like the attempt to hide the number of DIMM slots... but I think it's still pretty easy to tell how many are there, because the tops of the slots are still showing, as well as a little of the bottom of the last slot.

So, is Intel trying to hide that Lindenhurst is 64-bit (XeonCE) compatible, or am I off base here?