I just hope the AMD chips get released for the PC market so we can get a CPU & GPU on the same package (hence using the same memory pointers) where neither one sucks.
I don't see the industry getting rid of the PCI-e 16x slots for dedicated graphics cards and gamers. This isn't going to happen for a while yet, and we should be careful what we wish for. I would rather be able to upgrade my graphics card when I run two 4K monitors than have to buy a whole new CPU package as well.
As you said, unified memory architecture needs to come first - get rid of the PCI-e bus completely, which means getting rid of the graphics 'card' as we know it. Until UMA comes, there will be no competitive integrated graphics; at least nothing that can compete with an add-in card. Integrated graphics are still in their terrible infancy, where all they say is 'no'.
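Just to put the 'same memory pointers' bit in concrete terms, here is a minimal sketch using CUDA managed memory (Nvidia's flavour of the idea, used only because it is the one I can actually show; the kernel and variable names are made up for illustration). The CPU and GPU both touch one allocation through the same pointer, with no explicit copy over the PCI-e bus:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel: the GPU scales the array in place, using the very same
// pointer the CPU filled in a moment ago -- no explicit host<->device copy.
__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *data = nullptr;

    // One allocation visible to both CPU and GPU (managed/unified memory).
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 1.0f;     // CPU writes...
    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f); // ...GPU reads/writes the same pointer
    cudaDeviceSynchronize();                        // wait before the CPU touches it again

    printf("data[0] = %f\n", data[0]);              // prints 2.0
    cudaFree(data);
    return 0;
}
```

On a true UMA part there wouldn't even need to be a hidden migration behind that managed allocation; the point is simply that one pointer works on both sides.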
The SKUs AMD is using are not for public consumption, and that is fine: when the consoles do finally arrive, they will still be less powerful than a good PC and will be completely buried within one cycle. Then again, for a floundering AMD, the chips will probably be a huge leap forward. Ho, ho.
We won't be seeing the dominance of integrated graphics on the PC for a while, that is for sure. It will be fine for the consoles, as they will be hemmed in by all manner of protocols that don't restrain the PC. The PC is and always will be more powerful than a console.
Leave integrated graphics to the mobile space and just figure out UMA with a swappable graphics chip. That would be sweet. I bet Nvidia thinks so as well.
I don't see the industry getting rid of the PCI-e 16x slots for dedicated graphics cards and gamers.
Perhaps, but only within the same brand. For the high end, the industry will never cooperate to accomplish what you're suggesting, like Intel redesigning a memory controller that coordinates with an Nvidia GPU. This would simply mean getting everyone in the same room, as it were, to make decisions. It would be cheaper for graphics card makers not to have to make full cards; the GPU would be treated as just another type of CPU (which is where it is already moving, something that started quite a while ago when GPUs became programmable), and the process would involve just producing a chip, not an entire card. This could then be swapped out exactly like a CPU.
Oh, and DirectX has nothing direct about it. It's horrible. No wonder consoles can often have better graphics than a PC with more capable hardware, and often all you get are shitty console ports.
I think that's only because they don't have to build a game for a thousand different PC configurations. With a console, it's just one universal system.
It's more the way it's (poorly) designed. Libraries or APIs (for any kind of programming, not just graphics) can shield the programmer from those multiple PC configurations and allow writing generic, boring code when what matters is programmer productivity, yet still give him the freedom to innovate or program for speed. For example, write 95% of the code using the convenience of libraries, but hand-write the 5% that is critical for speed, image quality or new ideas.
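For what that 95/5 split looks like in practice, here is a rough sketch in CUDA with Nvidia's Thrust standing in as the 'convenience library' (nothing to do with DirectX itself; the saxpy kernel is just a made-up example of the hand-tuned 5%):

```cuda
#include <thrust/device_vector.h>
#include <thrust/sequence.h>
#include <thrust/reduce.h>
#include <cstdio>

// The "5%": a hand-written kernel for the one step you actually care about tuning.
__global__ void saxpy(const float *x, float *y, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 10;

    // The "95%": let a generic library (Thrust here) handle allocation,
    // initialization and the boring bookkeeping.
    thrust::device_vector<float> x(n), y(n, 1.0f);
    thrust::sequence(x.begin(), x.end());          // x = 0, 1, 2, ...

    // Drop down to raw pointers only for the hot loop.
    saxpy<<<(n + 255) / 256, 256>>>(thrust::raw_pointer_cast(x.data()),
                                    thrust::raw_pointer_cast(y.data()),
                                    2.0f, n);
    cudaDeviceSynchronize();

    // Back to the library for the summary step.
    float total = thrust::reduce(y.begin(), y.end());
    printf("sum = %f\n", total);
    return 0;
}
```

The library keeps the boring parts portable across configurations; you only drop to raw pointers and a custom kernel for the loop where speed or image quality actually depends on it.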
Eventually, Microsoft caved in to consumer backlash about having a sensor watching their every move even when the console is supposedly sleeping, so the company said okay, the motion-detecting sensor can be turned off in the system settings.
Now Microsoft's Marc Whitten has come forward to explain that if Xbox One owners unplug Kinect, the world won't stop spinning. "Like online, the console will still function if Kinect isn’t plugged in, although you won’t be able to use any feature or experience that explicitly uses the sensor," he said.
He also said that Kinect will be totally off when the user switches it off in the settings – there's (supposedly) no secret sleeping mode. "You have the ability to completely turn the sensor off in your settings," he added. "When in this mode, the sensor is not collecting any information. Any functionality that relies on voice, video, gesture or more won’t work."