14:16 On the Relationship Between the CPU and GPU: Fusion, Replacement, or Coexistence? » Blog@Intel China

Lately, questions like "In a PC, which matters more, the CPU or the GPU?" and "Will the CPU absorb the GPU, or will the GPU replace the CPU?" have been among the industry's liveliest debates. Today I'll join the fray and offer my own views and opinions.

1. First, the Birth of the CPU and the GPU

CPU stands for Central Processing Unit; most people call it the central processor, or simply the microprocessor. More precisely, its full name is the general-purpose microprocessor. The name "central processor" is apt: the CPU is the single most important core component of the entire computer system, the computer's "brain" and its center of thought.

The world's first microprocessor, the Intel 4004, was created by Intel engineer Ted Hoff in 1971. The first truly general-purpose microprocessor was the Intel 8080, born in 1974 with a clock frequency of 2 MHz.

GPU stands for Graphics Processing Unit. In early system designs the graphics unit appeared only as the "display output subsystem", physically realized as the display card, and handled mainly 2D graphics processing and output. Later it took on graphics acceleration as well, hence the name graphics accelerator. The GPU as an independent concept emerged alongside the boom in 3D gaming, relatively late, around the end of the last century (roughly 1995).

Today the GPU's role in the computer system has risen from an inconspicuous one to a fairly important position, but its job is still mainly to assist the CPU by shouldering the heavy computation involved in 3D graphics acceleration. To highlight its importance, some have begun calling the graphics processing unit a graphics processor.

The first GPU in the PC arena was the Voodoo 3D graphics accelerator chip, launched by 3dfx in 1995.

2. The Design Goals of Today's CPUs and GPUs

Because their design goals differ, today's CPUs and GPUs differ fundamentally in function. The CPU, as its name suggests, is a general-purpose processor designed to handle general tasks: processing, computation, and core system control. Its most basic components are the arithmetic logic unit and the control unit. The CPU's microarchitecture is optimized to handle, efficiently, an enormous variety of work, from computational tasks with little inter-data dependence to complex, intricate non-computational ones, so it copes comfortably with everyday workloads of every kind.

The computer's "soul", the operating system, along with virtually 100% of system software, relies chiefly on the CPU to run. The arithmetic, logic, and control processing the CPU faces is intricate and complex, and the data and information it handles is large not only in volume but in variety. This all-rounder design for computation and control makes the CPU effortlessly capable in the computer, and its position unshakable.

The GPU, by design, exists to accelerate graphics, today chiefly 3D graphics, so its design is essentially optimized for the computations that 3D acceleration involves: z-buffering for hidden-surface removal, texture mapping, and coordinate transformation with lighting (transforming & lighting), among others. These computations operate on massive amounts of parallel data: the data volume is huge, but the operations themselves are not complex, being largely similar and repetitive, heavy on computation but light on logic. Matrix arithmetic is the archetypal graphics workload.
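To make the data-parallel point concrete, here is a minimal illustrative sketch (my own, not from the original post): the same small matrix computation is applied independently to every vertex, which is exactly the shape of work a GPU's many parallel lanes can execute all at once.

```python
# Data-parallel nature of graphics work: the SAME small computation
# (a matrix-vector multiply) is applied independently to every vertex,
# so thousands of GPU lanes can each take one vertex with no coordination.

def transform(matrix, vertex):
    # One 3x3 matrix times one 3-component vertex.
    return tuple(sum(matrix[r][c] * vertex[c] for c in range(3)) for r in range(3))

# A uniform scale-by-2 transform applied to a batch of vertices.
scale2 = [[2, 0, 0], [0, 2, 0], [0, 0, 2]]
vertices = [(1, 0, 0), (0, 1, 0), (1, 1, 1)]

# On a CPU this loop runs serially; a GPU runs every iteration at once,
# which is exactly the "huge data volume, simple uniform operation" profile.
transformed = [transform(scale2, v) for v in vertices]
print(transformed)  # [(2, 0, 0), (0, 2, 0), (2, 2, 2)]
```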

In sum, the GPU's strengths in graphics computation are beyond the reach of today's CPUs, though a future CPU with an integrated GPU would be another matter. Conversely, set against the CPU's general-purpose computation and complex logic handling, the idea of the GPU running the operating system and system software in the CPU's place seems a bit of artistic overreach. There is only one way it could happen: turning the GPU into a CPU by grafting large amounts of CPU core design onto it. But would such a GPU still be a GPU?

My own view: in the mainstream market, integrating the GPU into the CPU is the general trend, while in the high-end market, such as demanding 3D gaming, the CPU-plus-discrete-GPU arrangement will persist for a long time. The GPU's powerful parallel computation over large data sets, especially in 3D and floating-point work, sits idle when there is no graphics task; if it can be harnessed to support the CPU in scientific computing, that is certainly a good thing.

Will the GPU go the way of the 80387, the math coprocessor of the 80386 era that began as a separate chip and vanished once it was absorbed into the CPU? Let the market settle this contentious question; any argument looks pale and feeble beside the market's final verdict.

In the next post I will discuss in detail the roles and positions of the CPU and GPU in today's computers, what each is and is not good at, the importance of balanced computing in system design, and where the trends are headed.

13:17 Google AdSense Keeps the Dream Alive » Google AdSense China Blog


I've been hooked on computers since I first touched one in the late 1990s, and I still am. I started with DOS, worked through several versions of Windows, and lived through the ups and downs of the Internet; by now I count as an old hand. At first I was obsessed with PC hardware, but I gradually found the network a stronger lure, because it let me talk with so many more people. I still remember building my first web page on 8u8; anyone who came up in that era will surely know the site. They offered free hosting space, templates, and second-level domains, and back then almost every classmate who knew a bit about computers had a personal home page of his own. Those pages held nothing but a few lines of text and a handful of pictures, yet they were filled with boundless longing for the Internet...

Not long after I started working, my company landed a contract for a government website. Nobody there understood this stuff except me, and only barely; like the proverbial newborn calf unafraid of the tiger, I knew 1% of the subject and was sure I could handle 100% of it. My boss took it very seriously and printed me a thick stack of material on domain names, virtual hosting, and FTP to study. After that first small success I became the company's resident Internet "expert" and began to engage with the network at a deeper level.

After building corporate websites for a while, I kept dreaming of a corner of the web that was mine. It stayed a dream, because domain and hosting fees were absurdly expensive then, not something a broke new graduate could casually afford. But there is always a way: I created a subdirectory under a company site for my own pages and pointed a free second-level domain at it, and having even that made me feel pretty impressive. In those days, people with home pages would greet each other with "How many megabytes is your space?"; today the question is "How much traffic do you get?". Perhaps that, too, traces the Internet's growth.

In 2005 I finally owned a real website of my own, just as headlines everywhere declared "the Internet's spring has arrived". I had no notion then of making money from a site; it was a hobby, and I assumed only large sites could land ads and collect advertising fees. After a while I discovered otherwise: people on the forums began boasting that their sites had turned a profit. So I set out on my own money-making road and tried N ad networks in succession, either getting rejected or doing the rejecting. The reason: the bigger networks looked down on low-traffic sites, and I couldn't stand the "quack-remedy poster" ads the smaller ones would plaster over my pages. Alas, it seemed the spring breeze had not reached my site.

Only after countless failures did I learn of Google AdSense. Trusting Google search as I did, I was certain this would be my final choice: I signed up, waited for review, and placed the code. All this time later I still remember the thrill of seeing the public-service ads appear on my own site, feeling for the first time how close Google actually was, right up to the first time an image ad showed there. It all left a deep impression. This may seem like nothing to the webmasters who have grown up since, but that is because they never watched Google AdSense grow up in China.

This is now my third year running Google AdSense. Let me describe the biggest worry those three years brought:

Fear of being banned! People kept saying their accounts were closed just as their earnings neared $100, with no chance to appeal, so every email titled "Google AdSense" gave me a scare; I always braced for a ban notice. Gradually I found the reality was different: the Google AdSense team patiently points out where you have broken the rules and patiently answers your questions. Once I placed AdSense on pages built around streaming MTV videos; within days a letter from AdSense told me that copyrighted pages like these may not carry Google AdSense, with a link to the relevant policy. I responded immediately, and as events proved, my account was not heartlessly banned. Since then, whenever I change anything about my ads I first check the relevant policy, and when I'm still unsure I write to the Google AdSense team. I suspect that is exactly why I've made it calmly into year three. As for the webmasters whose accounts were banned, they surely know the reasons best.

Google AdSense hasn't made me rich overnight, but it has kept my website dream alive, and that is enough. If you love the web this much and long to share your discoveries and joys with others, then an ad that earns you extra income without spoiling your site's look is its own small delight, is it not? I have always believed we build sites to offer visitors a little help and earn their recognition; most visitors are only passers-by, but I trust they feel grateful for the help you give. And if you manage that, Google AdSense will hand you a big surprise as well!
11:59 Bookmarks and Annotations: The Emerging Vertical Knowledge Network » 大学小容 > Make good use of the web to aid your growth!

Diigo is a fine tool, and Xiaorong has been using it for a while now. Xiaorong set up a group called Oliver Ding's Scrapbook, where he shares his bookmarks and annotations with friends and they share theirs with him. Every weekend, Diigo emails him a digest of the group's bookmarks and annotations from the past week.

From now on, whenever the bookmarks on some topic accumulate to a certain number, Xiaorong will collect them into a post on this Blog; hopefully this occasional further curation of bookmarks will prove a good way to learn. These irregular round-ups will be filed under the tags "Bookmarks and Annotations" and "Scrapbook".


Henry Jenkins: Why Should Academics Blog? by Oh My Media | Media and Communication Studies…

Quoted from the original:

Running a blog is a commitment and an obligation, and should be seen as one part of the pile of obligations that make up your profession. When I first began blogging as an academic, I asked other bloggers for advice. What they stressed was the importance of setting a publishing schedule for your blog and sticking to it. Whether you write weekly or daily matters little; what matters is that you keep adding content. Otherwise, it is all too easy on any given day to skip writing a post, and over time the accumulated temptation to slack off will carry you away from the blog entirely. In your "hard times", deadlines and a strategy for producing content matter greatly; a certain degree of "discipline" keeps you motivated.

Xiaorong's note:

Every knowledge worker should carry out the process of knowledge exploration openly on the web, so that it becomes socialized and proceeds faster. For the Chinese blogosphere, only when more and more knowledge workers start blogging, and keep blogging, will information in vertical fields emerge, and only then will the informational value of the Chinese web rise to the level of knowledge exploration.

The Flood and the Scarcity of Information Are Relative by 一言谈 | New Media Observe…

Quoted from the original:

What cannot be ignored is that a population with vertical content needs is taking shape, and I myself belong to one of these many need-driven groups. They only need sufficient information in the one field they care about; for everything else, the remaining 20% of their time is enough.

Xiaorong's note:

An age that needs horizontal filters still needs gatekeepers of vertical information.

Making More Scientific Use of Psychologists by Ouyang's Psychology Notes (欧阳的心理学笔记)

Quoted from the original:

When people take an interest in psychologists, it is mostly an interest in their own problems. Entrepreneurs who want to meet psychologists hope to learn how a psychologist can help them adjust their own mindset, and hope even more that a psychologist can help them, like Shi Yuzhu, fully exploit human weaknesses so that consumers open their wallets more readily and the user base spreads widely. Young mothers interested in psychology hope to learn how to educate their children better and let them grow up happy. White-collar women interested in psychology mostly hope to pick up some knowledge about personal growth.

Xiaorong's note:

This post is about the gulf between psychologists and the public's needs; in fact the phenomenon is common to every discipline and industry. Theoretical knowledge is the foundation, practical knowledge the application. Practice clearly needs theory's guidance, or people lose their bearings and get lost in a tangle of surface appearances; and theoretical knowledge must in turn draw nourishment from practice, or it easily degenerates into an academic game of castles in the air.

"Social Networks and Organizations" Is Not for SNS Startup Founders by Douban Books (豆瓣读书)

Quoted from the original:

A more accurate title for "Social Networks and Organizations" would be "How to Analyze and Study Social Networks and Organizations". Entrepreneurs are businesspeople: they need to apply the analytical findings of scientists and theorists, not analyze and study theory themselves. For those of us founding SNS websites, what we most need on "social networks and organizations" are two kinds of books: first, books describing what social networks and organizations look like in real life and what characterizes them; second, books on how networks and organizations formed online differ from those offline (The Long Tail is such a book, describing where the online economy differs from the offline one).

Xiaorong's note:

A very interesting point, and it raises a question worth pondering: where does the knowledge of mainland Chinese web entrepreneurs come from?

Pure cloning: the simplest approach, directly copying the structure, design, and operating model of a foreign website;

A little innovation: building on a foreign site with additions of one's own devising (not everything foreign is perfect; they too are works in progress), plus whatever changes local-market fit demands;

Full development: building a product that does not yet exist even abroad, which means developing everything from head to toe yourself. And how many teams possess all the knowledge that full development requires?!

The Human Factors Lab Group Blog Launches by Ouyang's Psychology Notes

Quoted from the original:

Human factors is a field I am very bullish on, and one where psychology has much to offer. I have long felt that psychology's commercial value in industry, design, creative work, and the Internet far exceeds its commercial value in counseling. The reason is simple: in counseling, the discipline's development depends too heavily on an expert's personal value and personal reputation. With no quantitative system for evaluating an expert's skill or a consultation's effectiveness, large-scale replication is all but impossible.

The Human Factors Lab project is a group of like-minded friends exploring this field. It is a research organization jointly maintained and joined by professional researchers, enthusiasts, and business promoters in human factors. As it grows, the possibility of its becoming a commercial organization is not ruled out.

Xiaorong's note:

A group Blog is an excellent basic tool for building a composite knowledge community: people from different knowledge backgrounds can gather around one core field of practice to exchange, explore, and spread ideas. On top of a group blog, add Google Groups and the Groups inside the various SNS sites, and a composite knowledge network can be assembled quickly. By the same token, a single-discipline knowledge community can use the same method to gather people who share one knowledge background.

SEO Secrets of AskApache.com » AskApache Web Development » Che Dong's shared items in Google Reader

SEO Secrets for Google High Ranking

First let me say that I am not into SEO. I don't read the research, I don't participate in the SEO community; my only interest is in the technology used by the search engines. All websites are hosted on servers, mostly Apache, and that is the primary topic of this blog. During the past year of this blog, my research into non-SEO areas has turned up some very valuable SEO techniques, all of them legal, ethical, and genuinely good for the Internet at large.

Some Background

I started this blog in January 2007, my first foray into blogging, and I've been extremely successful at achieving top-ten Google rankings and maintaining on average 15K unique visitors/day (per Google Analytics), 85% of whom come from search engine traffic.

NOTE: I take it for granted that anyone reading AskApache is an expert of some skill; if you aren't, I apologize, but I can't waste time on the easy stuff.

Prerequisite SEO

There are literally hundreds of thousands of SEO articles on the net, 99.9% of which are absolute garbage, especially in the sense that they just repeat the same ten-year-old stuff. However, to do any kind of advanced SEO like I am going to discuss in this article, I am assuming that you, the intelligent reader, have already read those and have a basic understanding of SEO fundamentals like meta tags, titles, keywords, etc.

First, Great Content

The foremost and most important step in achieving any kind of traffic is to produce great content. I'm sure you've heard that a million times, but let me break down how I perceive it. Before I even started to mess with SEO for AskApache.com, I began by writing articles. At that point I didn't have a clue what my blog was going to be about, or even whether I'd still be doing it after a week.

What is Content

For me, being a top-paid professional web developer, I spend about 80% of my time doing research. I think that is a bit uncommon, but it's a throwback from the 10 years I spent in the network/computer security field, where research is 99% of the job; a story for another time, perhaps.

So the research I was doing at that time was about best-practice standards-based web design, mainly XHTML Strict, CSS, and unobtrusive javascript. Each of those subjects has become near and dear to my heart, and each should also be mandatory learning for anyone interested in SEO. The best advice I can give towards that end is checking out the CSS, Javascript, and XHTML Strict source code for this page and site. And of course the holy W3.org.

In addition to striving to master those 3 subjects, I was also and always will be researching web programming languages like PHP, Ajax, Ruby, and Server Technology like Apache. Although I should note that my research into Apache and server technologies is more of a hobby than a job requirement, also a throwback to my days in the security industry and of course my love for open source software.

My Content

So basically I was spending 25% of my time at work actually working, and the other 75% researching how to do something better, faster, best. Incredibly, I discovered or rediscovered a ton of tips, tricks, and methods to aid me in my work. I was learning so much valuable information that I joined a couple of forums to discuss it and get feedback on making it even better. Soon I realized that I was one of the few who actually post content to a forum instead of just questions, so I decided to write my tutorials down on a blog, and AskApache was born.

So that is why this blog consists of almost 100% tutorials, and why almost all of them are completely original works you won't find elsewhere. That's how I create content, but you might do something different. Whatever it is that you do for content, just make sure you are providing VALUE with everything you do. Not to everyone; just stuff that you would consider valuable if you were reading it.

Second, Great Site

OK, so I had 10 or so great articles that I knew would provide value for many web developers, but so what? Nobody cares, you know. That's when I decided to take a closer look at the software that was running my new blog, WordPress, and I've been hacking the code ever since on my never-ending quest to be the best and master the most advanced web development. You'll see why in a couple of paragraphs.

I Mean, a Really Great Site

By great, I mean you need to make it incredibly user-friendly. Every design and development decision you make should be about the visitor. THAT'S the number one key to success on the net, regardless of endeavor. Here is a list of things you definitely need to have before you do SEO; I'm not listing obvious stuff like descriptions, titles, and good writing.

  1. Intuitive and circular, your website should be a spider-web of urls.
  2. Easy to read, plenty of white-space, design is your decision but I like minimalistic.
  3. Super fast rendering. You need effective caching and optimization.
  4. A very helpful 404 error page, hopefully never seen.

Focus in on your URLs

Many sites that use a CMS of some kind, be it Drupal or WordPress, have hundreds or thousands of URLs even if they only have 10 actual posts/articles.

Removing Duplicate Content

You've all heard this before, but almost no one has taken it to the level I am going to discuss. Bear with me.

Removing duplicate content is actually a very straightforward process if you know what you are doing; if you don't, well, that's why I'm going to quickly explain how to really do a good job.

Locate Duplicate Content and URLS

People tend to think this just means not repeating the same paragraph in different articles. That is partially true, but the main impact on your site comes when the same article can be accessed from more than a single URL.

I hope you realize you MUST use pretty URLs like my site, not codey-looking ones with question marks. You can find any potential duplicate URLs on WordPress with the RewriteRules plugin. Also check Google's Webmaster Tools for duplicate URLs, and you can use Xenu's Link Sleuth as well.

Remove Duplicate Urls with .htaccess

Once you've found duplicate URLs, you need to instruct Google and other search engine robots to be redirected to the correct URL. A 301 redirect tells the search engines NOT to index the bad URL, only the good one. Below is some of the .htaccess code I use on this site to accomplish this technique; this is gold I use myself, so pay attention. It works.

301 Redirects with mod_rewrite

First, let's start with the one everyone should know, and the most common: to www or not to www?

# If the Host header is anything but www.askapache.com (case-insensitive),
# permanently redirect the request to the canonical www hostname.
RewriteCond %{HTTP_HOST} !^www\.askapache\.com$ [NC]
RewriteRule ^(.*)$ http://www.askapache.com/$1 [R=301,L]

It's a highly rare individual who has seen this one, which forces requests for .html/ to .html:

# Match request lines whose path ends in ".html/" and strip the trailing
# slash with a permanent redirect to the canonical .html URL.
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /[^\.]+\.html/\ HTTP/ [NC]
RewriteRule ^(.*)\.html/$ http://www.askapache.com/$1.html [R=301,L]

301 Redirects without mod_rewrite

For the rest of the duplicate urls that you find, I like to use Redirect and RedirectMatch.

This redirects requests whose path starts with /& or /&amp or /( to my homepage.

RedirectMatch 301 ^/&(.*)$ http://www.askapache.com/
RedirectMatch 301 ^/&amp(.*)$ http://www.askapache.com/
RedirectMatch 301 ^/([\(]+)(.*)$ http://www.askapache.com/

This redirects requests with //whatev to /whatev

RedirectMatch 301 ^//(.*)$ http://www.askapache.com/$1
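As a sanity check, the RedirectMatch patterns above can be exercised offline with Python's re module. This is a hedged sketch of my own, not part of the original post: Apache matches each pattern against the URL path and substitutes $1, $2... in the target, which the little harness below mimics with \1, \2 backreferences.

```python
import re

# The RedirectMatch patterns from the .htaccess above, paired with their targets.
rules = [
    (r'^/&(.*)$',       'http://www.askapache.com/'),
    (r'^/&amp(.*)$',    'http://www.askapache.com/'),
    (r'^/([\(]+)(.*)$', 'http://www.askapache.com/'),
    (r'^//(.*)$',       r'http://www.askapache.com/\1'),
]

def resolve(path):
    """Return the 301 target for a path, or None if no rule matches."""
    for pattern, target in rules:
        if re.match(pattern, path):
            return re.sub(pattern, target, path)
    return None

print(resolve('/&foo'))        # http://www.askapache.com/
print(resolve('//htaccess/'))  # http://www.askapache.com/htaccess/
print(resolve('/good-url'))    # None (no redirect fires)
```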

But this is just a brief look at what you will have to spend some time on. There are detailed guides to doing this with mod_rewrite and Redirect on my blog. It's time now for some real SEO tips. The heart of the matter, as it were.

Wrap It Up

I realize that was brief, so I want to really stress two things, or you won't take away much from Part 1.

Locate Duplicate URLS

This is truly one of the most important things in my personal experience. I take it as far as I can: I regularly grep my access, mod_security, and error log files looking for bad URLs. I am always checking them out to see if someone has a bad link to me somewhere, or if someone just typed it in wrong. If it's a bad link on a site, I will very politely keep contacting the webmaster about it until they fix it.

Even I, with my many colorful years of Internet travel, was caught off guard by the variety, creativity, and sheer number of URLs people use to link to my site. I found that bad links were often published because my URLs were simply too long, so I shortened them. Now, of course, bad links can't really even touch my site with all my 301s in place.

Besides grepping your server's logs, the second best place to locate duplicate or just plain wrong URLs is Google's free Webmaster Tools. They keep track of all the bad URLs linking to your site and let you download this data in .csv spreadsheet format. The first time I checked I found over 1,000 bad links; after a couple of months with my RewriteRules and 301 Redirects, I've narrowed the list down to under 50 most months. That is a powerful reason to use 301 Redirects, as we'll really get into in Part 2.
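The log-grepping step above can be sketched in a few lines of Python. This is a hedged example of my own: it assumes Apache's common/combined log format, and the sample lines below are invented for illustration; in practice you would feed it your real access log.

```python
import re
from collections import Counter

# Matches the request path and status code in Apache common/combined log lines, e.g.
# 1.2.3.4 - - [29/Apr/2008:10:00:00 +0000] "GET /page.html/ HTTP/1.1" 404 512
LOG_LINE = re.compile(r'"(?:GET|POST|HEAD) (\S+) HTTP/[\d.]+" (\d{3})')

def bad_urls(lines, statuses=('404',)):
    """Count requested paths that drew one of the given status codes."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.search(line)
        if m and m.group(2) in statuses:
            counts[m.group(1)] += 1
    return counts

# Invented sample log lines: one good hit, two hits on a malformed duplicate URL.
sample = [
    '1.2.3.4 - - [29/Apr/2008:10:00:00 +0000] "GET /htaccess/ HTTP/1.1" 200 4523',
    '1.2.3.4 - - [29/Apr/2008:10:00:01 +0000] "GET /htaccess// HTTP/1.1" 404 512',
    '5.6.7.8 - - [29/Apr/2008:10:00:02 +0000] "GET /htaccess// HTTP/1.1" 404 512',
]
print(bad_urls(sample).most_common())  # [('/htaccess//', 2)]
```

Each URL that surfaces this way is a candidate for one of the 301 rules shown earlier.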

301 Redirect Bad URLS

Finding the bad URLs takes some time, a couple of hours even; the whole reason you do it is to be able to create 301 Redirects from all of those bad URLs to good ones.

One thing I wasn't even aware of until several months ago: when Googlebot locates a bad URL for your site, it tries to access it, and if you haven't planned for this in advance, your page will most likely return a 200 OK status header, or if you are lucky a 404 Not Found error, both of which really hurt you.

Basically, a 200 response will produce duplicate content 99.9% of the time, and 404 responses whisper to Google's algorithms that you don't know what you are doing and your site isn't authoritative. A 200 means Google will index the bad URL; a 404 means Google won't index it, but it also won't give up trying for a while, which takes attention away from your real URLs.
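To see which of those three responses a given URL actually draws, a quick check with Python's standard library works. This is a self-contained sketch, not from the original article: the toy server below stands in for a real site, with one canonical URL, one 301'd duplicate, and 404 for everything else.

```python
import http.client
import http.server
import threading

# A toy server standing in for a real site.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/good.html':
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b'ok')
        elif self.path == '/good.html/':      # duplicate URL: redirect, don't serve
            self.send_response(301)
            self.send_header('Location', '/good.html')
            self.end_headers()
        else:
            self.send_error(404)

    def log_message(self, *args):             # keep the output quiet
        pass

server = http.server.HTTPServer(('127.0.0.1', 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

def raw_status(path):
    """The status code a crawler sees on its first request, redirects NOT followed."""
    conn = http.client.HTTPConnection('127.0.0.1', server.server_port)
    conn.request('GET', path)
    code = conn.getresponse().status
    conn.close()
    return code

codes = [raw_status(p) for p in ('/good.html', '/good.html/', '/missing')]
print(codes)  # [200, 301, 404]: index me / go here instead / keep retrying
server.shutdown()
```

The 301 on the duplicate URL is the one authoritative answer of the three, which is exactly the point of the next section.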

What a 301 Response tells Google

301 Responses were practically invented for user-agents/clients/web-crawling robots like Google's. They instruct the client, whether that's a person's browser or Googlebot, that the resource/page it is looking for is actually at a different URL. This is an authoritative response that makes Googlebot and other search engines ecstatic, because now they can give up on the 200 and 404 responses that didn't really give them an answer either way.

On the other hand, a great 404 can and should be just as powerful as a 301, but hardly anyone uses them in the correct way according to HTTP 1.1 or 1.0 Specifications. We’ll tear that subject apart further down the road.

I'll leave this topic for now with one last idea: 301 Redirects, when implemented and used correctly, actually pass the PageRank and search engine ranking of the bad URL on to the redirected/correct URL. That means if you have 1000 unique links pointing to your article, and all of them are incorrect in some way, and you can 301 redirect all of those bad links to your correct link, you now have 1000 new good links! It has to be done right, and in a classy way, of course.

Now that you have content and a great site, it's time to SEO like a mofo.

I just have a few things left before I publish Part 2, which goes into the actual details and SEO secrets I use. This intro post was a necessary evil, I'm afraid.


00:29 Is Mathematics Discovered or Invented? » 格致 - Understand the World, Enjoy Science

There is an interesting thread on Solidot: is mathematics discovered or invented? It points to this Science News article: Still debating with Plato.

The Platonists answer "discovered", and their number includes the famous mathematical physicist Sir Roger Penrose. They hold that the truth or falsity of mathematical statements is independent of personal belief, implying that the statements describe some objective reality. That prompts a strange thought: objective, but located where? Did mathematical truths really exist before we imagined them? Then again, if mathematics is created, why can't 2 + 2 equal 5?

Plenty of readers here study mathematics or theoretical physics; perhaps you will find the question interesting too.

Mathematics has always been considered special: the foundation of science, even independent of it. Personally, though, I think mathematics, like physics and music, depends on human society. If aliens exist, they may well have a very different "mathematics". If you hold the opposite view, please convince me. :)

The Science News piece cites an article from the June 2007 issue of the Newsletter of the European Mathematical Society, Let Platonism Die, as well as three articles from this year's June issue, which apparently has yet to appear. Something to look forward to.


^==Back Home: www.chedong.com

^==Back Digest Home: www.chedong.com/digest/

<== 2008-04-28
==> 2008-04-30