Title: Learning Multimedia Features from Collective Intelligence
Speaker: Dr. Hanwang Zhang
Affiliation: NExT Research Center, jointly established by the National University of Singapore and Tsinghua University
Time: Tuesday, May 17, 2016, 9:30–11:00
Venue: Conference Room 408, Yifu Building
Speaker Bio: Dr. Hanwang Zhang is a research fellow at NExT, a research center jointly established by the National University of Singapore and Tsinghua University. He received his Ph.D. from the National University of Singapore in 2014, and his research focuses on multimedia and computer vision. Dr. Zhang has published more than twenty papers (including nearly ten oral presentations) at top international conferences and in leading journals such as CVPR, ICCV, ACM Multimedia, SIGIR, AAAI, TIP, and TOMCCAP. His honors include a Best Demo nomination (2012) and the Best Student Paper Award (2013) at ACM Multimedia, the premier international conference in multimedia, as well as the Best Ph.D. Thesis Award from the School of Computing at the National University of Singapore (2014). He has served as an editorial board member or guest editor for international journals including Multimedia Tools and Applications and Neurocomputing, and as a reviewer for leading journals such as TIP, TMM, TCSVT, and TOMCCAP.
Abstract: Traditional feature learning requires a considerable amount of well-annotated data (e.g., ImageNet), whose construction is itself expensive and time-consuming. Unfortunately, such data can hardly keep up with the ever-evolving trends in multimedia applications, such as target-domain shift and novel semantic concepts. In this talk, I will share our recent research progress in learning features from collective intelligence, which is collected at scale from the inexhaustible stream of Web user-generated content and behaviors such as Facebook "likes", Google "clicks", and Pinterest "pins". In effect, our research is a more aggressive and practical realization of weakly supervised and unsupervised learning. We will explore several interesting tasks on discovering meaningful semantics from user behaviors and attempt to uncover the underlying rationales. Last but not least, I will outline some promising future directions.