22:38 Tips for seeding branded web videos » laolu's blog: Blog

Continuing the topic of viral videos. On the Feed Company website I came across an article on viral video seeding techniques, written by its president Josh Warner and also published on MarketingVOX.

Original article: How-To: Tips for Seeding Branded Web Videos; all videos in the article are from YouTube. A translation follows:

Tips for Seeding Branded Web Videos

By Josh Warner (president of Feed Company)

For advertising agencies and brands, viral video marketing is a low-cost, high-value complement to traditional online marketing. It gives agencies and clients a chance to try risky creative ideas that don't fit TV or pre-roll and post-roll formats [translator's note: pre-roll and post-roll refer to ads placed before or after a video].

It can also be seen as generous ROI (return on investment). Popular videos keep getting watched, sometimes even months after they were seeded online.

For many agencies and brands, questions of strategy and execution still surround what remains a relatively new marketing practice. Here are tips for planning a branded web video seeding campaign:

Web video is not for every brand

Not every product campaign is a fit for web video seeding. If you sell a product like detergent that calls for a more in-depth, feature-based selling approach, you may want to stick with traditional TV, print, or newspaper inserts.

Of course, there are exceptions. In 2006, blender maker Blendtec launched a viral effort with its "Will It Blend?" campaign: a man in a lab coat blends all sorts of unlikely objects. The series is still going; watch him blend an iPhone 3G [see below; uploaded July 11 this year, already viewed more than 2.3 million times].

Still, generally speaking, video seeding works best for "lifestyle brands" with a strong online user base.

Video seeding is not for every creative idea

When considering seeding a video online, make sure the piece is good enough that the audience will do something to lift you up. Ask yourself: would I share this video with a friend?

If the answer is no, then long-form or pre-roll/post-roll video formats may be a better route (if you don't scrap the whole concept).

If the answer is yes, then a well-executed video seeding process can ensure your video reaches the tipping point. Your creative will determine how far it goes.

Do it yourself, or use a video seeding company?

A branded video's viral success depends largely on how well it is seeded. When budgets are tight, using in-house resources, such as interns, to seed your video is a fine option. But professional seeding companies can guarantee view counts for the videos they seed online, starting at 100,000 views. Seeding companies are experts at reaching YouTube, Facebook, MySpace, and the other sites, blogs, and newsgroups where videos are typically shared and discussed.

That guarantee, along with the reach, engagement, and reporting they provide, is why many advertisers prefer a professional seeding company over a more basic, do-it-yourself route.

Matching the campaign to the seeding company

Video seeding companies have different engagement models and strengths. Choose one that fits your client and your campaign.

NMS [translator's note: presumably NMS Communications?] offers buzz generation and is known for its social media reporting. 7th Chamber [translator's note: http://www.the7thchamber.com], part of Kontraband, distributes videos across a network of sites it owns and operates. Feed Company targets users and influencers across a network of sites, blogs, and social destinations, based on your video's type.

Any of these models can produce results; your choice should depend on your creative and your goals.

Transparency

Make sure your video seeding company is accountable and transparent in its marketing and reporting process. Know what to expect: Who will they market your video to? How will they communicate by email with users on YouTube and on other sites? Will they perform damage control (and how)? Will they disclose who they are and their connection to your campaign? (Make sure they are upfront! If they pose as fanboys [translator's note: a fanboy is someone obsessed with comics, movies, games, and the like, roughly an irrational devotee or superfan; see this article], bloggers will treat your work as unacceptable spam.)

A professional video seeding company should have answers to, and opinions on, these topics, but remember: working transparently and openly works best. Bloggers and site editors respond best when your intent is clear and your identity is known.

Make it almost real

Many of the best branded viral videos show things you could imagine yourself or your friends doing:

"Guys Backflip into Jeans" (for Levi's) [translator's note: more than 4.6 million views; uploaded May 5 this year]

"Ball Girl" (for Gatorade) [translator's note: more than 700,000 views; uploaded June 3 this year]

These videos fall into the category of ordinary people doing extraordinary things. Such "fake" viral videos carry a "how did they do that?" element that sparks conversation and sharing, resulting in millions of views.

Don't push believability too far all at once, or your audience will lose interest. A recent Wendy's branded viral video falls into that latter category. See how it fared on YouTube (I'll give you a hint: not well). [translator's note: just over 10,000 views; uploaded August 22 this year]

Use more approaches to find your audience

"Fake" viral videos can succeed, but you can't fool people into coming. Let the creative lead; when else do you get to work with such a small budget?

See Häagen-Dazs's "Bee-Boy dance crew drops dead". [translator's note: video below; uploaded July 15 this year, more than 1.2 million views; it promotes the Häagen-Dazs - Help the Honey Bees site, marketing Häagen-Dazs ice cream around the theme of saving the bees]

Electronic Arts' much-watched Red Alert 3 Remix also took off in a short time, earning praise and viral-scale view counts. [translator's note: video below; uploaded September 18 this year, already more than 300,000 views.]

If you have a story to tell, there is an audience for it somewhere. Loosen up and get out there. You should feel good as long as users make the right connection between your work, your brand, and their lives.

Present your brand links with care

Present the brand to users in subtle ways. Put the brand name in the video tags, include a branded element in your YouTube account, and link back to your microsite (which should also match the content and tone of your video).

A video we seeded for Parrot (a Bluetooth technology provider) is a good example of this subtle strategy: its YouTube account is "itsparrot", and Parrot's microsite URL appears at the end of the video. [translator's note: video below; uploaded June 10 this year, more than 600,000 views.]

For our Levi's web video campaign, executed together with Cutwater, the YouTube account is "unbuttonedfilms".

When the two are relevant to each other, don't be afraid to inject the brand experience into the video. For the web video "Super Mario Rescues the Princess", animation creator Seth MacFarlane included sponsor Burger King as an overt element [translator's note: the original article has no link to this video; I found it on YouTube, uploaded September 9 this year with an astonishing 7.3+ million views; Burger King appears in the closing credits. See below]:

Super Mario Rescues The Princess: Seth MacFarlane's Cavalcade

Beyond view counts

View counts are a key measure of a seeding campaign's success, but not the only one. Measurement also covers the duration of discussion, comments, blog mentions, search results, and social networks. (Facebook Lexicon may prove useful for that particular network.) The quality of the conversation around your video must also be considered.

Mainstream media outlets, such as ABC, CNN, and MSNBC, occasionally broadcast successful videos or run them online.

By looking beyond view counts and establishing additional measures for branded web video, this practice, and this industry, can evolve in variety and maturity without losing the charm of streaming media.

/* Viral video is a complement to marketing, with high ROI. Strategy and execution: creative, seeding, measurement. */

22:20 Poor man’s query logging » MySQL Performance Blog

Occasionally there is a need to see what queries reach MySQL. The database provides several ways to share that information with you. One is called the general log, activated with the --log (or --general-log in MySQL 5.1+) start-up parameter. The log writes any query executed by MySQL to a file, with a limited amount of additional information. The other is the slow log, enabled by the --log-slow-queries parameter (MySQL 5.1 also requires --slow-query-log), which was designed to store poorly performing queries that take at least 2 seconds. Percona actually extended the slow log to, among other things, include any query regardless of execution time.
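As a sketch of what that up-front preparation looks like, both logs could be enabled at startup roughly like this (the log file paths are made up; the option names are the pre-5.1 forms mentioned above):

mysqld_safe --log=/var/log/mysql/general.log \
            --log-slow-queries=/var/log/mysql/slow.log &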

The problem is that both require preparation up front: either enabling the logging before starting the database instance or, even more work, applying the patch and rebuilding MySQL from source.

I know that many databases out there run with neither of these enabled, and it would take a restart to get the logging in place and possibly another restart to disable it when no longer necessary (though actually the slow log can be disabled by simply setting the long_query_time MySQL variable to a high enough value).
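For instance, the slow log threshold can be raised at runtime to effectively silence it, along these lines (3600 seconds is an arbitrary example value, and the change requires the SUPER privilege):

mysql> SET GLOBAL long_query_time = 3600;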

So what can be done when you really need to see the queries, but can’t afford any downtime?

If you are a privileged user (i.e. root), you can use tcpdump on a database server to take a peek into a network stream and filter for packets that go to MySQL. Those packets contain queries. Here’s my quick one-liner which I will write in multiple lines:

garfield ~ # tcpdump -i eth0 -s 0 -l -w - dst port 3306 | strings | perl -e '
while(<>) { chomp; next if /^[^ ]+[ ]*$/;
  if(/^(SELECT|UPDATE|DELETE|INSERT|SET|COMMIT|ROLLBACK|CREATE|DROP|ALTER)/i) {
    if (defined $q) { print "$q\n"; }
    $q=$_;
  } else {
    $_ =~ s/^[ \t]+//; $q.=" $_";
  }
}'

The output may contain a little garbage, but it can easily be filtered out.

Obviously this method works only when applications communicate with MySQL through TCP sockets. When localhost (not to be confused with 127.0.0.1) is used as a MySQL host, this will not work since all traffic goes through a unix socket file.
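If you control how the application or your test client connects, one workaround is to point it at 127.0.0.1 explicitly so the traffic goes over TCP rather than the socket file; a sketch with a hypothetical user and database name:

mysql -h 127.0.0.1 -P 3306 -u appuser -p appdb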

It's most definitely not a replacement for the MySQL logs, but it can be very useful if you just need a few minutes' worth of queries.

Maciek


Entry posted by Maciej Dobrzanski | 2 comments


"10,000 hours of effort + better the head of a chicken than the tail of an ox" is the age-old law of success (Mr. 6) mr6.cc » 车东's shared items in Google Reader

"10,000 hours of effort + better the head of a chicken than the tail of an ox" is the age-old law of success

by Mr. 6 on November 3rd, 2008; 13 comments so far

Well-known bestselling author Malcolm Gladwell (referred to in this post as "Brother G"), after publishing The Tipping Point in 2000 and Blink in 2005, is about to release his third book, Outliers: The Story of Success, the week after next (November 18). This week, coverage of the book has peaked in the European and American press, with discussion in the New York Times, The Times, and elsewhere.

Brother G's writing style is to cite a large number of figures of all kinds, many of which readers have never heard of, and then string them together, section by section, into the logic of the whole book. The fun of reading his books lies in reading these little laws one by one, each yielding its own insight. But this approach has also brought him plenty of controversy: many people, whether actual scientists or journalists, feel Brother G is merely toying with "pop science." My own view is that we all need more data and facts; Brother G digs them up for us and at least suggests an explanation, which is better than letting them sink without a trace.

So what interesting figures and facts does Brother G want to tell us this time? This time he wants to discuss something everyone cares about, the "science of success," and he has reached a very particular conclusion: "success is not accidental." I don't yet know who will publish this book in Taiwan, and I have no way to read it in advance, but just from piecing together reports here and there, I have already gathered some striking "success lessons":

1. Better the head of a chicken than the tail of an ox [i.e., better a big fish in a small pond]: In this book Brother G cites phenomena such as this: hockey is Canada's national sport, yet strangely, the strongest Canadian hockey players in history seem to have been born in January, February, or March. It turns out that youth hockey groups players by birth year, so kids born in the same year play together. A child born on January 1 is thus the oldest in the group, and one born on December 31 the youngest. Although the older kids are only a few months older, they are a bit stronger, a bit taller, a bit smarter, which makes it easier for them to become "little star players" all the way up; with confidence established, everything goes smoothly afterwards. Economists agree with this point; they long ago found that children born late in the year are less likely to get into good universities, which is also related to this grouping method. This also offers parents a lesson: some parents are eager to send their children to school early or have them skip grades. Brother G believes that unless you can guarantee your child truly has an IQ of 180, starting school early is more likely to ruin his prospects, so think twice.

So when choosing a school, should you pick a "good neighborhood" and a "good school"? Brother G gives another example. The great lawyer Joe Flom was born and educated in the 1930s in New York's then-rundown Garment District. Precisely because he came from a bad neighborhood, he had more opportunities to easily take what others had, and in the process kept building himself up both psychologically and in real ability. Importantly, although this lawyer later completed his studies, he faced the ethnic discrimination of the time in America and could not get into the established firms, so he naturally struck out on his own, which opened up a new road for him (perhaps, having grown used to making his own way and succeeding in a bad school, he dared to keep doing so). Brother G later found that Joe Flom was not the only one who succeeded: several lawyers of that era turned out to be Jews born around 1930 and raised in the Garment District! In other words, that era and that rundown neighborhood were, surprisingly, a cradle of success!

2. The "10,000-hour rule": Brother G says every successful person got there through ten years of effort. Over those ten years they practice 3 hours a day: 3 x 365 x 10, and only after 10,000 hours does success come. He gives examples: many pianists start playing at 5 and by 15 are already different from everyone else; young tennis players start training at 6 and by 16 are already different from everyone else. I agree with this completely; I once wanted to write a book called 「默默耕耘十年間」 (roughly, "quietly plowing away for ten years"), but in the end only published 《搶先佈局十年後》.

These two points tell us that "effort pays off." The question is: do you believe it?

As a kid, I attended a very ordinary public school. However bad a public school is, it still has "favor classes" [classes you get into through connections], but my parents never asked to get me into one; they believed education at home mattered far more, and they turned out to be right. I studied very hard at home, and at school my effort showed up immediately in exam scores; I was first in my class all the way. Later, in Canada, I started running into "geniuses," which was very discouraging: I had read so much and thought so long about something, and they figured it out instantly! But because of that childhood experience that "effort leads to success," I never gave up working hard even when I was frustrated. So although there was a period when the "geniuses" beat me, later on, whenever an opportunity arose, I could break out again and again.

10,000 hours will not be wasted; the only question is whether you believe it. If you don't, go try being a "chicken's head" and you will; and once you believe, remember it for the rest of your life.

This Malcolm Gladwell book is quite different from his other books, and even from other Western books, in that he intends to point out so directly that success is "10,000 hours, plus better the head of a chicken." These are decisions everyone can make right away, for themselves and for their children; perhaps you don't even need the book in hand, you can start today.

08:47 Living with backups » MySQL Performance Blog

Everyone does backups. Usually it's some nightly batch job that just dumps all MySQL tables into a text file or simply copies the binary files from the data directory to a safe location. Obviously both approaches involve much more complex operations than my last sentence suggests, but that is not important right now. Either way the data is out and ready to save someone's life (or job at least). Unfortunately, taking a backup does not come free of cost. On the contrary, it's more like running very heavy queries against each table in the database when mysqldump is used, or reading a lot of data when copying physical files, so the price may actually be rather high. And the more effectively the server resources are utilized, the more of a problem that becomes.
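Just for illustration, such a nightly logical dump might look something like this (the destination path is made up; --single-transaction gives a consistent snapshot only for transactional tables such as InnoDB):

CODE:
  mysqldump --all-databases --single-transaction | gzip > /backups/mysql-$(date +%F).sql.gz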

What happens when you try to get all the data?

The most obvious answer is that it needs to be read, through I/O requests, from the storage it resides on. The storage handles reads issued by the system, but only at a limited rate. So when a task reads a lot of data very quickly, just as the archiving process does when it runs, it causes a huge number of requests to be pushed down to the disks, saturating the I/O quite easily. Naturally, at the same time the database needs to perform all those regular tasks like serving queries, using and competing for the very same disks to read or write whatever comes its way, so that your favorite website can still show up in the browser. Moreover, reads sent from the backup process usually ask for many sequential blocks of data, and such an access pattern may be preferred by the I/O scheduler over the random I/O coming from MySQL; such large I/O requests also take significant time to complete, and the way typical disks work prevents anything else from being executed in the meantime. And so the database often has to wait much longer for its disk operations to be scheduled and executed, which translates into slower query execution and significantly degraded performance.
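One rough way to watch this happening on a live system (assuming the sysstat package is installed) is to keep an eye on extended disk statistics while the backup runs; %util close to 100 and growing await times suggest the disks are saturated:

CODE:
  iostat -x 5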

Anything else?

All modern systems cache whatever is read from storage. This reduces I/O to such devices for frequently accessed information. After a successful read the block of data is placed in the cache and then served from memory alone should anything ask for the same block again, for as long as it does not get flushed. MySQL obviously takes advantage of this functionality just like any other application, and this is especially true for MyISAM tables, which have dedicated buffers only for indexes, while the actual data is always read from disk through the system cache. The active portions of tables will likely be placed by the system in memory and kept there for a long time. Since memory access is way faster than any disk access, even with the fastest drives or RAID configurations, the performance gains are quite clear. Now back to flushing data out of the cache: it happens by replacing old and unused blocks with newly read ones, so the more new blocks come in, the more old ones have to go. Just imagine what happens to all the cached data during a backup run, when the process reads several times more information than there is physical memory installed on the server - it's not difficult to have a database of that size. Everything is wiped out and replaced by random "garbage" for no good reason. Since the hit ratio gets worse as the cache fills with random information, additional I/O occurs, slowing everything down even further.

What does it all mean?

There is a clear conflict between the regular database activities, which need fast response times, and backups, which would gladly behave as if at an all-you-can-eat bar. On a busy database server it may turn out that running a simple copy operation on the MySQL data directory ends in total disaster once MySQL stops responding to incoming queries quickly enough.

What can be done about it?

Nothing really when you are using mysqldump. You can play some tricks, but it’s mostly beyond your control.

 

Otherwise, when copying physical data files, you can limit the rate at which data is read so as not to saturate the I/O. This is unfortunately not something you can do with standard Linux tools like cp, scp or tar. However, rsync, for example, can do it with the --bwlimit=KBPS option. The problem with rsync is that it needs to build a list of files to transfer before it can take any action, and this operation alone is often quite heavy on I/O and is not subject to any limits.
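As a sketch, a throttled copy with rsync might look like this (paths are made up; --bwlimit is expressed in KB per second, so 15000 roughly matches the 15000000 bytes/s used in the tar example below):

CODE:
  rsync -a --bwlimit=15000 /mnt/snapshot/mysql/ backup@storage-host:/backups/mysql/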

Some time ago we prepared a patch for tar that implements --read-rate=BytesPerSecond. In this case the advantage of using tar over rsync is that you can immediately, on the fly, create a compressed archive. For example:

CODE:
  /root/backup-tools/tar --read-rate=15000000 -C /mnt/snapshot -c -z -v -f - mysql | ssh backup@storage-host /root/backup-tools/write_backup.sh

This reads the /mnt/snapshot/mysql directory at 15000000 bytes/s, creating a tar/gzip archive out of it and writing it to standard output, which is then redirected through the pipe over SSH to a remote host, where a script reads standard input and writes the archive to the proper location (where would we be without one-liners).

 

On Linux there is also a utility called ionice. It lets you affect how the I/O scheduler deals with I/O requests coming from a given process. Giving the backup application a low class or priority means it won't get in the way of the database's work as much.

CODE:
  ionice -c3 /root/backup-tools/tar ...
  ionice -c2 -n7 /root/backup-tools/tar ...

Please consult the ionice man page for usage details; it's really very simple to grasp and use. The restriction is that the system must be using the CFQ elevator (I/O scheduler); ionice does not work with the others. But that is not really a problem, since most modern systems run CFQ by default, and even if not, you can change it at runtime anyway. To check the current setting you need to query your block devices. For the SCSI sub-system (devices named sda, sdb, sdc, etc.) that can be done with:

CODE:
  # cat /sys/block/sd?/queue/scheduler
  noop anticipatory [deadline] cfq
  noop anticipatory [deadline] cfq
  noop anticipatory [deadline] cfq

To change it, you just need to write the name of the new scheduler to those scheduler files:

CODE:
  # for device in `ls /sys/block` ; do if [ -f /sys/block/$device/queue/scheduler ] ; then \
      echo "cfq"> /sys/block/$device/queue/scheduler ; fi ; done
  # cat /sys/block/sd?/queue/scheduler
  noop anticipatory deadline [cfq]
  noop anticipatory deadline [cfq]
  noop anticipatory deadline [cfq]

That's it, now you can enjoy experimenting with ionice.

 

It might also be possible to make a backup application that does not interfere with the system cache. By specifying the O_DIRECT flag when opening a file, an application tells the system to bypass the cache for it during reads. This is so far only an idea, since there are no tools I know of that support it well. The problem is that when accessing a file opened with the O_DIRECT flag, reads need to be aligned to the file system block size, so usually the size has to be divisible by 4096 to read it correctly. Even though that is always the case for InnoDB tablespaces, other MySQL data files do not comply with this requirement. The trick here could perhaps be to read the file with O_DIRECT up to the last full block and then perform a regular cached read on just the last few bytes, appending them to the target file.
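A very rough sketch of the cache-bypassing read, using GNU dd's iflag=direct (the paths are hypothetical, and this ignores the unaligned-tail problem described above, so it is only an illustration of the idea, not a complete solution):

CODE:
  dd if=/mnt/snapshot/mysql/ibdata1 iflag=direct bs=1M | gzip > /backups/ibdata1.gz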

 

But even if all those precautions have been taken, there is still a chance of performance problems on the working MySQL instance. Such danger may come, for example, from an unexpected spike in load or traffic, or even from quite expected spikes that you simply can't do anything about. So the next step I thought could be taken here was to constantly monitor the database status and, if any problems were noticed, simply pause the copying. I wrote a simple Perl script to do just that. It works by sending signals that either stop or resume the application that copies the data:

CODE:
  use POSIX ":sys_wait_h";
  use DBI;
  # fork and run the copying process (tar) as a child
  $pid= fork();
  if ($pid == 0)
  {
    exec('/root/backup-tools/tar --read-rate=15000000 …');
  }
  $was_running= 1;
  $is_running= 1;
  while(1)
  {
    # signal 19 = SIGSTOP (pause the copy), 18 = SIGCONT (resume it)
    if ($is_running == 0 && $was_running == 1) { kill 19, $pid; $was_running= 0; }
    if ($is_running == 1 && $was_running == 0) { kill 18, $pid; $was_running= 1; }

    …
    my $sth= $dbh->prepare('SHOW GLOBAL STATUS LIKE "Threads_connected"');
    …
    if ($$row[1] < 10) { $is_running= 1; }
    if ($$row[1] > 50) { $is_running= 0; }
    …
    # stop monitoring once the child process has finished
    $kid= waitpid($pid, WNOHANG);
    if ($kid > 0) { last; }
    sleep(1);
  }

What this script does is check the MySQL status every second, and if the number of connected threads goes above 50, it sends a STOP signal to the archiving process. Whenever the number of connected threads drops back to 9 or fewer, the script sends a CONT signal, which resumes archiving. The thresholds are of course different for every MySQL instance; these are just examples. The checks can also be more sophisticated or include things like processlist information, CPU load averages, I/O load, etc.

In this case tar is actually launched by the monitoring script directly, but that's not really necessary. The script simply needs to know the PID of the process to manage and have a way to tell when it ends.
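The same pause/resume trick can also be tried by hand against an already-running copy, using that process's PID (12345 below is of course hypothetical):

CODE:
  kill -STOP 12345
  kill -CONT 12345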

This is of course just a concept of what I'm using successfully in some difficult environments; you can try building a mechanism that suits your own needs based on it.

Maciek

P.S. If you know someone who does not care about backups, please let him know this URL for our data recovery services.


Entry posted by Maciej Dobrzanski | 16 comments


