Art Center Nabi

Global AI Hackathon


Dates: 2016.12.01 (Thu) – 12.04 (Sun)
Venue: COEX Hall B (2016 Creative Economy Expo)
Host/Organizers: Ministry of Science, ICT and Future Planning / Art Center Nabi, Korea Foundation for the Advancement of Science and Creativity
Partner: IBM Watson
Program: Hackathon, Exhibition, Talk Concert


[Associated Event 1 / Talk Concert]

AI and Humanity: Artificial Intelligence and Our Future
Date: 2016.12.03 (Sat), 10:00–12:00
Venue: COEX Hall B, Main Stage
Pre-registration link:

http://creativekorea-expo.or.kr/unitEvent/request

 

 

[Associated Event 2 / Hackathon Final Presentation]
Date: 2016.12.04 (Sun), 10:00–12:00
Venue: Global AI Hackathon booth, COEX Hall B



◇ Participants

 

Goldsmiths, University of London / Creative Computing
Mick GRIERSON, Rebecca FIEBRINK, Hadeel AYOUB, Jakub FIALA, Leon FEDDEN

New York University / Interactive Telecommunications Program
Gene KOGAN

School of Machines, Making & Make-Believe / The Neural Aesthetic
Andreas REFSGAARD

Seoul National University / Biointelligence Lab
Byoung-Tak ZHANG, Eun-Sol KIM, Kyoung Woon ON, Sang-Woo LEE, Donghyun KWAK, Yu-Jung HEO, Wooyoung KANG, Ceyda ÇINAREL, Jaehyun JUN, Kibeom KIM

Art Center Nabi / Nabi E.I. Lab
Youngkak CHO, Youngtak CHO, Junghwan KIM, Yumi YU

Georgia Institute of Technology / Center for Music Technology
Gil WEINBERG, Mason BRETAN, Si CHEN

Hong Kong University of Science and Technology / Department of Computer Science and Engineering
Tin Yau KWOK, Minsam KIM

City University of Hong Kong / Department of Electronic Engineering
Ho Man CHAN, Qi SHE


 

Art Center Nabi presents the Global AI Hackathon, under the theme "Artificial Intelligence for Social Care", at the 2016 Creative Economy Expo (December 1–4, COEX, Seoul).

 

This is Korea's first global AI hackathon, bringing together about 30 AI makers from five countries: Korea, the United States, the United Kingdom, the Netherlands, and Hong Kong. The event consists of three parts: a hackathon, an exhibition, and a talk concert.

AI labs from leading universities worldwide take part, from Seoul National University to Goldsmiths in the UK, Georgia Tech and NYU in the US, the Hong Kong University of Science and Technology, and City University of Hong Kong, joined by Nabi E.I. Lab, Art Center Nabi's creative production laboratory. Under the theme of "Social Care", the makers propose the social and cultural roles that artificial intelligence technology can play.

This is the second AI hackathon, following the online AI hackathon (ideathon) held at the end of September; the teams will now develop the ideas proposed there into working prototypes. The hackathon floor runs as an open studio, with the entire making process open to the public, and the results will be presented at the final presentation on the morning of Sunday, December 4, the last day of the expo. The hackathon is held in partnership with IBM Watson, and some of the makers will build and present AI works using Watson APIs.

On the morning of December 3, a talk concert titled "AI and Humanity" will be held on the main stage of Hall B. Under the theme "Artificial Intelligence and Our Future", four experts from academia, industry, and the arts will take part. Through the works of AI makers from Korea and abroad, Art Center Nabi aims to present the social and cultural roles and possibilities of AI technology in the age of the cognitive revolution, and, through the audience-participatory exhibition and the talk concert, to open a space for reflecting on the human role in, and relationship to, the use of this technology.



Teams and Projects




Goldsmiths, University of London

 

Project Name
Bright Sign Glove
 
Keywords
Assistive Technology, Wearable Technology, Healthcare Innovation, Machine Learning, Gesture Recognition
 
Concept & Intent
While accessible technology is becoming more and more common, efforts to break the communication barrier between sign language users and non-users have yet to leave the world of academia and high-end corporate research facilities. Inspired by and building on Hadeel Ayoub's research to date, we have striven to build a wearable, integrated and intuitive interface for translating sign language in real time.
 
Technology
The Sign Language Data Glove is equipped with flex and rotation sensors and accelerometers that provide real-time gesture information to a classification system running on an embedded chip. We use various IBM Bluemix services to perform text-to-speech conversion and language translation. The embedded system includes a speaker and a screen to display and speak detected words.
From a UX viewpoint, we aim to create a smooth experience that minimizes setup and maintenance interactions by providing a set of utility gestures. We use IBM cloud services and local storage to transfer and cache data, enabling offline usage. We have also built a simple web application to manage custom gestures uploaded from the embedded system to the IBM cloud.
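
A minimal sketch of the glove's sensor-to-speech loop, assuming a nearest-neighbour classifier as a stand-in for the team's embedded model and the circa-2016 watson_developer_cloud Python SDK. The sensor layout, training samples, labels, and credentials below are all hypothetical.

```python
# Hedged sketch: classify one glove sensor frame, then speak the word
# via IBM Watson Text to Speech (watson_developer_cloud SDK, ca. 2016).
from sklearn.neighbors import KNeighborsClassifier
from watson_developer_cloud import TextToSpeechV1

# Hypothetical training frames: five flex values plus three gyro axes.
X_train = [[0.91, 0.88, 0.12, 0.10, 0.15, 0.0, 0.1, -0.2],  # "hello"
           [0.10, 0.12, 0.90, 0.88, 0.85, 0.3, 0.0,  0.1]]  # "thanks"
y_train = ["hello", "thanks"]
clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

tts = TextToSpeechV1(username="BLUEMIX_USER", password="BLUEMIX_PASS")

def speak_gesture(frame):
    """Classify one sensor frame and synthesize the detected word."""
    word = clf.predict([frame])[0]
    audio = tts.synthesize(word, accept="audio/wav",
                           voice="en-US_AllisonVoice")
    with open("word.wav", "wb") as f:
        f.write(audio)
    return word
```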
 
Material

IBM Watson Bluemix Text to Speech API and Language Translation API, Raspberry Pi Zero, IBM Watson Python SDK, flex sensors, gyroscope, mini OLED screen, mini speaker, smart textile gloves

 
 
Team
Hadeel Ayoub, PhD researcher in Arts and Computational Technology; Jakub Fiala, MSci Creative Computing; Leon Fedden, Creative Computing undergraduate. Hadeel, Leon and Jakub are researchers from London, based at Goldsmiths, University of London and the Wellcome Trust, working in the fields of accessible technology, creative computing and machine intelligence.






 

Gene Kogan & Andreas Refsgaard
 
Project Name
Doodle Tunes
 
Keywords
Machine Learning / Music
 
Concept & Intent
This project lets you turn doodles (drawings) of musical instruments into actual music. A camera looks at your drawing, detects instruments that you have drawn, and begins playing electronic music with those instruments.
 
Technology
It's a software application, built with openFrameworks, that uses computer vision (OpenCV) and convolutional neural networks (ofxCcv) to analyze a picture of a piece of paper on which instruments have been hand-drawn, including bass guitars, saxophones, keyboards, and drums.
The classifications are made using a convolutional neural network trained on ImageNet, and the application sends OSC messages to Ableton Live, launching clips that play the music for each instrument.
The software will be made available at https://github.com/ml4a/ml4a-ofx
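
The project itself is an openFrameworks (C++) application, but the control flow is simple enough to sketch in Python: take the labels the ConvNet detected in the drawing and fire OSC messages at Ableton Live. The OSC address, port, and clip mapping below are illustrative assumptions, not the project's actual scheme; python-osc stands in here for the ofxAbletonLive addon.

```python
# Hedged sketch: map detected instrument labels to OSC messages that
# trigger clips in Ableton Live via an OSC bridge (addresses invented).
from pythonosc.udp_client import SimpleUDPClient

ABLETON = SimpleUDPClient("127.0.0.1", 9000)  # hypothetical OSC port

# Hypothetical mapping from ConvNet class label to a Live clip slot.
CLIP_FOR_LABEL = {"bass": 0, "saxophone": 1, "keyboard": 2, "drums": 3}

def play_detected(labels):
    """Launch one clip per instrument the ConvNet saw in the doodle."""
    for label in labels:
        if label in CLIP_FOR_LABEL:
            ABLETON.send_message("/live/play/clip",
                                 [CLIP_FOR_LABEL[label], 0])

play_detected(["bass", "drums"])
```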
 
Material
openFrameworks (including mainly ofxOpenCv, ofxCv, ofxCcv, ofxAbletonLive), a stand and attached web camera, some paper, and pens.
 
Team

Andreas Refsgaard and Gene Kogan are artists, programmers, and interaction designers working with machine learning to create real-time tools for artistic and musical expression.





Seoul National University
 
Project Name
AI Scheduler: Learn your life, Build your life
 
Keywords
Automatic scheduling / Daily life pattern / Deep learning / Inverse reinforcement learning / Wearable device 
 
Concept & Intent
In this project, we explore the possibility of an AI assistant that will one day have a part in your life. Being able to predict your daily life patterns is an obviously necessary function of such an assistant, so we focus on the theoretical issue of how to learn a person's daily life pattern. For this hackathon, we developed a system that automatically recognizes the user's current activity, learns the user's activity patterns, and then predicts future activity sequences.
 
Technology
From wearable-camera and smartphone data, the system automatically recognizes the user's current activity, time, and location. For this part, the Watson Visual Recognition API and several machine learning algorithms were used. Based on this information, the system learns the patterns of the user's daily life. We devised the learning algorithm based on inverse reinforcement learning theory so that the user's life pattern can be learned properly; the system can then generate the user's future life pattern. As a scheduler, it is also desirable for the system to interact with the user and reflect the user's intentions, so we developed a web-based interface that interacts with the user in natural language. For this, IBM Watson's Conversation service was used along with Text to Speech and Speech to Text.
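
The team's pattern learner is built on inverse reinforcement learning; as a hedged stand-in, the sketch below learns transition statistics over already-recognized activity labels with a first-order Markov chain and greedily rolls out a predicted sequence. The activity names are invented for illustration.

```python
# Hedged sketch: learn transitions between daily activities and
# predict the next few, as a simple proxy for the team's IRL model.
from collections import Counter, defaultdict

log = ["sleep", "breakfast", "commute", "work", "lunch", "work",
       "commute", "dinner", "sleep"]

# Count observed transitions between consecutive activities.
transitions = defaultdict(Counter)
for prev, nxt in zip(log, log[1:]):
    transitions[prev][nxt] += 1

def predict(current, steps=3):
    """Greedily roll out the most frequent next activity."""
    out = []
    for _ in range(steps):
        if not transitions[current]:
            break
        current = transitions[current].most_common(1)[0][0]
        out.append(current)
    return out

print(predict("breakfast"))  # -> ['commute', 'work', 'lunch']
```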
 
Material
TensorFlow, VGG Net, wearable camera, web-based interface (IBM Bluemix Conversation Service, Speech to Text Service, Text to Speech Service, Node-RED)
 
Team
We are graduate students at the Biointelligence Laboratory, Seoul National University. We are interested in the study of artificial intelligence and machine learning on the basis of biological and bio-inspired information technologies, and in their application to real-world problems.





 
Art Center Nabi
Nabi E.I. Lab (Emotional Intelligence Laboratory)

Project Name
A.I. Interactive Therapy
 
Keywords
Artificial Intelligence, Art Color Therapy (CRR Test), New Media Art, Interaction, IBM Watson
 
Concept & Intent
A.I. Interactive Therapy attempts to analyze human psychology and emotion through artificial intelligence. The system is interactive: the client conducts psychological counseling through direct physical action. It is based on a creative approach to the imagery of art therapy, and it is designed to analyze emotional stability and the inner aspects of human psychology by making full use of the interactive qualities and visual effects of new media art.

 
Technology
This project is based on the CRR Test, a color psychology analysis method in which the subject's mental state is analyzed from the ordered selection of three out of eight plane figures; the project grafts artificial intelligence onto this method. First, a UI environment is created by an application projected vertically onto the floor. A Kinect camera tracks the user's movements so that selections and interactions can be made from movement values, letting the user experience the UI environment in real time. A voice feedback system handles specific results and the overall operation of the system; this runs on IBM Watson's Conversation API. A kind of chatbot system is employed, and the voice feedback decides how the process proceeds using IBM Watson's STT/TTS. The result derived from the final selection is also computed through artificial intelligence. The application is written in Python.

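A hedged Python sketch of the CRR selection step: the visitor's tracked floor position, which the installation gets from the Kinect, picks three of the eight projected figures in order. The zone grid, colour list, and coordinates below are illustrative assumptions; the actual analysis and feedback run through Watson.

```python
# Hedged sketch: map tracked floor positions to the eight projected
# CRR figures and collect the visitor's ordered selection of three.
COLORS = ["red", "orange", "yellow", "green",
          "blue", "indigo", "violet", "magenta"]

def zone_at(x, y, cols=4, cell=1.0):
    """Map a floor position (metres) to one of the 8 projected zones."""
    col, row = int(x // cell), int(y // cell)
    idx = row * cols + col
    return COLORS[idx] if 0 <= idx < len(COLORS) else None

def crr_selection(positions):
    """Collect the first three distinct zones the visitor stands in."""
    picks = []
    for x, y in positions:
        zone = zone_at(x, y)
        if zone and zone not in picks:
            picks.append(zone)
        if len(picks) == 3:
            break
    return picks  # ordered triple handed to the analysis step

print(crr_selection([(0.5, 0.5), (1.5, 0.5), (1.2, 1.4)]))
# -> ['red', 'orange', 'indigo']
```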
 
Material
Hardware: Projector, Kinect v2 camera, Mac PC, speaker
Software: IBM Watson API, Python, PyKinect2
 
Team
E.I. Lab at Art Center Nabi is a creative production laboratory that researches and tests the contact points between art and technology. E.I. Lab stands for Emotional Intelligence Laboratory, and it focuses on creating new content through emotional approaches and technical research. The members are new media artist Youngkak Cho, software developer Youngtak Cho, designer Junghwan Kim, and interaction designer Yumi Yu. The lab is currently researching and developing fusion projects based on robotics and artificial intelligence technologies.






Georgia Institute of Technology

Project Name
Continuous Robotic Finger Control for Piano Performance
 
Keywords
Robotics, Deep Learning, Machine Learning, Computer Vision, Robotic Musicianship
 
Concept & Intent
This project uses deep neural networks to predict fine-grained finger movements from muscle-movement data captured at the forearm. In particular, we look at the fine motor skill of playing the piano, which requires dexterity and continuous, rather than discrete, finger movements. While the most direct application is restoring musical ability to musicians with amputations through smarter prosthetics, the successful demonstration of fine motor control by our deep learning technique opens promising applications for this method and our novel sensor throughout the medical and prosthetic fields.
 
Technology
The final deep learning technique used to demonstrate continuous robotic finger control was a four-layer fully connected network followed by cosine similarity and a softmax loss. Images were normalized before input, and batch normalization was used to smooth the regression results; in post-processing, noisy regression results were further smoothed with a filtering step. To implement this network and run our earlier experiments, we used TensorFlow, Torch, and pre-trained Inception networks to construct our deep networks.
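
As a rough illustration of the regression network described above, the sketch below builds a four-layer fully connected model in TensorFlow (via the Keras API) that maps a frame of forearm-sensor readings to continuous per-finger positions. The layer widths, channel counts, synthetic data, and plain MSE loss are assumptions; the team's actual training used cosine similarity with a softmax loss.

```python
# Hedged sketch: four fully connected layers from muscle-sensor frames
# to continuous finger positions (synthetic data, MSE loss assumed).
import numpy as np
import tensorflow as tf

N_SENSORS, N_FINGERS = 8, 5  # hypothetical channel counts

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu",
                          input_shape=(N_SENSORS,)),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(N_FINGERS),  # continuous finger positions
])
model.compile(optimizer="adam", loss="mse")

# Train on random stand-in data in place of the glove/MIDI recordings.
X = np.random.rand(256, N_SENSORS).astype("float32")
y = np.random.rand(256, N_FINGERS).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1]))  # predicted position of each finger
```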
 
Material
Deep learning libraries: TensorFlow, Inception, Torch
Data collection: glove bend sensor, MIDI output from keyboard, muscle sensor
 
Team
Mason Bretan and Si Chen are PhD students working in robotic musicianship and computer vision, respectively. Gil Weinberg is a Professor in the School of Music and the founding director of the Center for Music Technology. They are based at the Georgia Institute of Technology, within the Center for Music Technology and the College of Computing, School of Interactive Computing.

 




City University of Hong Kong & Hong Kong University of Science and Technology
 

Project Name
Cognitive DJ
 
Keywords
IBM Watson, tone analysis, conversation, emotion vector, music recommendation
 
Concept & Intent
The project aims to develop a novel Cognitive DJ that selects music for users based on their emotions. Current music recommendation systems require the user to search for the exact music manually and/or rely on the user's history; such approaches cannot reflect or meet emotional needs in real time. By interacting with the Cognitive DJ through text or speech, the user receives recommendations based on their current emotional state: the Cognitive DJ analyzes the content and tone of each user's conversation to recommend the right song at the right moment.
 
Technology
We have established a music database that includes songs from diverse categories. IBM Watson tone analysis is applied to the lyrics of each song to derive an emotion vector with five scores, indicating anger, disgust, fear, joy, and sadness. Using the Conversation service provided in IBM Watson, we have created our workspace, including intents, entities, and dialog. The conversation with the Cognitive DJ is carried out through a user-friendly interface. Based on the user's conversation with the Cognitive DJ, the user's emotion vector is derived and compared with the emotion vectors of all songs, and the song whose emotion scores are closest to the user's is recommended. New recommendations are provided as the user continues the conversation with the Cognitive DJ.
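
A minimal sketch of the matching step: treat the tone-analysis output as a five-dimensional emotion vector (anger, disgust, fear, joy, sadness) and recommend the song whose precomputed vector lies closest to the user's. The songs and scores below are invented for illustration, and Euclidean distance stands in for whatever similarity metric the team used.

```python
# Hedged sketch: nearest-song lookup over 5-dim emotion vectors
# (anger, disgust, fear, joy, sadness), with invented song scores.
import math

SONG_EMOTIONS = {
    "Song A": [0.1, 0.0, 0.1, 0.8, 0.1],
    "Song B": [0.6, 0.2, 0.3, 0.1, 0.5],
    "Song C": [0.1, 0.1, 0.2, 0.2, 0.7],
}

def recommend(user_vec):
    """Pick the song whose emotion vector is nearest (Euclidean)."""
    def dist(v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(user_vec, v)))
    return min(SONG_EMOTIONS, key=lambda s: dist(SONG_EMOTIONS[s]))

print(recommend([0.2, 0.1, 0.1, 0.7, 0.1]))  # -> "Song A"
```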
 
Material
Python, Java, IBM Watson.
 
Team
Qi She is a Ph.D. student in the Department of Electronic Engineering (EE) at City University of Hong Kong (CityU), working on statistical machine learning methods for neural data analysis. Minsam Kim, Sikun Lin, and Wenxiao Zhang are graduate students in the Department of Computer Science and Engineering (CSE) at the Hong Kong University of Science and Technology (HKUST), working on machine learning and time-series data analysis.
The team's mentors are Dr. James Kwok of HKUST and Dr. Rosa Chan of CityU. Dr. James Kwok is a Professor of CSE at HKUST working on machine learning and neural networks. Dr. Rosa Chan is currently an Assistant Professor of EE at CityU; her research focuses on computational neuroscience and neural interfaces.