


Academic Seminar Notice (No. 2016-25)

發(fā)布時(shí)間:2016-08-11 瀏覽次數(shù):

Title: Person Re-identification: Benchmarks and Our Solutions

Speaker: Prof. Qi Tian

Affiliation: University of Texas at San Antonio, USA

Time: 15:00–16:30, August 13, 2016

Venue: Room 408, Yifu Science and Education Building, Tunxi Road Campus

Abstract: Person re-identification (re-id) is a promising way towards automatic video surveillance. As a research hotspot in recent years, re-id has created an urgent demand for a solid benchmarking framework, including comprehensive datasets and effective baselines.

To build a large-scale person re-id benchmark, we propose a new high-quality frame-based dataset for person re-identification titled “Market-1501”, which contains over 32,000 annotated bounding boxes plus a distractor set of over 500K images. Unlike traditional datasets, which use hand-drawn bounding boxes that are unavailable under realistic settings, we produce the dataset with the Deformable Part Model (DPM) as the pedestrian detector. Moreover, this dataset is collected in an open system, where each identity has multiple images under each camera. We propose an unsupervised Bag-of-Words representation and treat person re-identification as a special task of image search, an approach we demonstrate to be both efficient and effective.
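Below is a minimal sketch, not the authors' implementation, of the image-search view of re-id described above: local descriptors are quantized into visual words with a k-means codebook, each image becomes a normalized Bag-of-Words histogram, and gallery images are ranked by cosine similarity to the query. The descriptor type, codebook size, and normalization are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(local_descriptors, num_words=350):
    """Quantize local descriptors (e.g., color/texture patches) into visual words.
    The codebook size of 350 is an illustrative assumption."""
    return KMeans(n_clusters=num_words, n_init=10, random_state=0).fit(local_descriptors)

def bow_histogram(image_descriptors, codebook):
    """Hard-assign each descriptor to its nearest visual word and build an L2-normalized histogram."""
    words = codebook.predict(image_descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)

def rank_gallery(query_hist, gallery_hists):
    """Rank gallery images by cosine similarity to the query, image-search style."""
    sims = gallery_hists @ query_hist          # dot product of L2-normalized histograms
    return np.argsort(-sims)                   # indices of most similar gallery images first
```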

To further push person re-identification towards practical applications, we propose a new video-based dataset titled “MARS”, the largest video re-id dataset to date. Containing 1,261 identities and over 20,000 tracklets, it provides richer visual information than image-based datasets. The tracklets are generated automatically by the DPM pedestrian detector and the GMMCP tracker. An extensive evaluation of state-of-the-art methods, including space-time descriptors, is presented. We further show that a CNN in classification mode can be trained from scratch using the consecutive bounding boxes of each identity.
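The following is a minimal sketch, under an assumed architecture and hyper-parameters rather than the MARS training pipeline, of a CNN trained in classification mode from scratch: each of the 1,261 identities is one class, cropped bounding boxes from its tracklets are the training images, and the penultimate feature serves as the re-id descriptor at test time.

```python
import torch
import torch.nn as nn

class IDClassifierCNN(nn.Module):
    def __init__(self, num_identities=1261):
        super().__init__()
        # Tiny illustrative backbone; the actual network in the talk is not specified here.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_identities)

    def forward(self, x):
        feat = self.features(x).flatten(1)   # embedding used for re-id matching at test time
        return self.classifier(feat)         # identity logits used only for training

model = IDClassifierCNN()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# One illustrative training step on a dummy batch of pedestrian crops.
images = torch.randn(8, 3, 128, 64)          # assumed crop size (H=128, W=64)
labels = torch.randint(0, 1261, (8,))        # identity labels for the batch
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```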

Finally, we present the “Person Re-identification in the Wild (PRW)” dataset for evaluating end-to-end re-id methods, from raw video frames to identification results. We study the performance of various combinations of detectors and recognizers, examine how pedestrian detection can help improve overall re-identification accuracy, and assess the effectiveness of different detectors for re-identification. To aid identification, we introduce an ID-discriminative Embedding (IDE) discriminatively trained in the person subspace using convolutional neural network (CNN) features, and a Confidence Weighted Similarity (CWS) metric that incorporates detection scores into the similarity measurement.
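As an illustration of the idea behind CWS, the sketch below weights the query-to-detection cosine similarity by the detector's confidence score, so unreliable detections are pushed down in the ranking; the exact weighting used in the PRW work may differ from this assumed form.

```python
import numpy as np

def confidence_weighted_similarity(query_feat, gallery_feats, det_scores):
    """Cosine similarity between query and detected gallery boxes, scaled by detection confidence."""
    q = query_feat / (np.linalg.norm(query_feat) + 1e-12)
    g = gallery_feats / (np.linalg.norm(gallery_feats, axis=1, keepdims=True) + 1e-12)
    return (g @ q) * det_scores   # higher detector confidence -> higher weighted similarity

# Usage: rank detected boxes for one query (dummy features and scores).
query = np.random.rand(64)
gallery = np.random.rand(100, 64)   # features of 100 automatically detected boxes
scores = np.random.rand(100)        # detector confidences in [0, 1]
ranking = np.argsort(-confidence_weighted_similarity(query, gallery, scores))
```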

About the Speaker:

Dr. Qi Tian received his bachelor's degree from Tsinghua University in 1992 and his Ph.D. from the University of Illinois at Urbana-Champaign (UIUC) in 2002. He previously worked in the Media Computing Group at Microsoft Research Asia as a Principal Researcher. Dr. Tian is currently a Full Professor in the Department of Computer Science at the University of Texas at San Antonio.

Supported by research grants from the U.S. ARO, NSF, DHS, Google, NEC, HP, and others, Dr. Tian has conducted extensive research in multimedia information retrieval, computer vision, pattern recognition, and bioinformatics. He has published more than 320 papers in academic journals and conferences, achieving important academic results with wide recognition from peers, and has received best paper / best student paper awards at renowned international conferences including ICMR, ICME, PCM, MMM, ICIMCS, and ICASSP.

Dr. Tian serves as an Associate Editor or editorial board member of the international journals IEEE T-MM, T-CSVT, MMSJ, and MVA, and has served as a guest editor for journals including IEEE T-MM and CVIU. He received the ACM Service Award in 2010, was awarded the NSFC Overseas Distinguished Young Scholar grant in 2014, and was elected an IEEE Fellow in 2016.
