SenseWall - An Open HCI Platform (6 years, 2 months ago)

"Multitouch technology has been a closely-guarded novelty, but they’re evolving into something else: a real, usable platform that focuses on content and not just gimmicks. In the process, a hard-working community is building richer, standards-based, cross-platform, free and open source tools. The result: faster iteration, broader access of artists to the technology, and soon, hopefully, better and better work." - Creative Digital Motion - June 2010 SenseBloom is proud to showcase their latest project, an HCI open platform called 'SenseWall'. SenseWall is a multitouch installation located at the Computer Science and Design department at the University of Coimbra in Portugal. This installation is an open platform so if anyone is interested in doing research on it or collaborating please let us know. This project is a excellent demonstration of the community's efforts combining CCV, PyMT and the CL-Eye Driver to achieve a large scale wall installation. All paired with open blueprints and discussion, hoping to help everyone in the community. Thanks to all the members that helped make this possible. In terms of hardware, the display has an area of 2.8m x 1.05m and it consists of 2 XGA ultra short-throw projectors amounting to a total resolution of 2048x768. For the multitouch sensing, this is an LLP setup using 8 infrared lasers, 2 PS3 Eye cameras and a custom compiled version of the excellent CCV tracker, giving us a touch resolution of 1280x480. Although it is a multitouch enabled surface it also has a camera at the top for computer vision applications, a microphone for sound input, speakers for sound output and an RFID reader (Touchatag) so hopefully these college students will exploit that, being in class courses or just for fun. A website and an "application launcher" will be available soon so that anyone can upload their apps to the SenseWall, and, this will be also opensourced. The main purpose with this installation is to let pupils learn new HCI concepts and highlight their creativity by giving them the tool to do so. And, like we said, we're also inviting anyone, and not just the college community, to submit any TUIO based app, or any other app that uses microphone/vision/RFID and we'll gladly send the creator a video of it. We do feel that it is also a great opportunity to showcase any artist/designer/coder out there. Join the Discussion  |   Project Page  |   Featured Article

CCV - Custom Object Tracker Preview Release (6 years, 2 months ago)

Introduction

As part of my GSoC 2010 project, I have been working with my mentor Pawel Solyga on the "CCV Custom Object Tracker". The project is not fully complete yet and still has a lot of work and integration ahead, but the object tracking part is done, so we are launching a preview release to get feedback from the community. CCV-COT, or CCV Custom Object Tracker, is a modified CCV that adds object tracking. This post assumes that you have a basic understanding of CCV (read Getting Started with CCV), as I will only be explaining the differences CCV-COT has from the main branch of CCV. Make sure to download the latest preview release and let us know any thoughts or ideas.

GUI Changes

This diagram only shows the differences from CCV 1.3; for a complete description refer to the CCV Overview Diagram.

Configuration & Calibration

The configuration is much the same as for CCV 1.3. While configuring for fingers, switch on Finger Tracking to be able to see the IDs/outlines of detected fingers. There are some changes in the XML structure of config.xml, which end users need not worry about.

1) Use the camera button [1] in the screenshot above to use the video camera.
2) Calibration is the same as in the CCV Calibration Tutorial.
3) First configure CCV to track finger blobs; we will configure object tracking afterwards.
4) Change the settings in the config.xml file, which you can learn how to do in the CCV Wiki Tutorial.

Up to this point everything is mostly the same as in CCV 1.3, which leads us to the object tracking steps below.

Getting Started With Object Tracking

Check that "template.xml" is present. It is either empty (if you have no templates saved) or contains template data in the following format:

    <TEMPLATE>
      <WIDTH>36.000000</WIDTH>
      <HEIGHT>55.000000</HEIGHT>
      <MINWIDTH>25.520044</MINWIDTH>
      <MINHEIGHT>38.988960</MINHEIGHT>
      <MAXWIDTH>46.833553</MAXWIDTH>
      <MAXHEIGHT>71.551262</MAXHEIGHT>
      <TRUEID>0</TRUEID>
      <ID>180</ID>
    </TEMPLATE>

In case you want to assign a particular ID to a template, change the "TRUEID" tag to 1 and the "ID" tag to the ID you want to give this template. For the previous example, it would look like:

    <TEMPLATE>
      <WIDTH>36.000000</WIDTH>
      <HEIGHT>55.000000</HEIGHT>
      <MINWIDTH>25.520044</MINWIDTH>
      <MINHEIGHT>38.988960</MINHEIGHT>
      <MAXWIDTH>46.833553</MAXWIDTH>
      <MAXHEIGHT>71.551262</MAXHEIGHT>
      <TRUEID>1</TRUEID>
      <ID>189</ID>
    </TEMPLATE>

If you want to clear all templates, you have to delete all the lines from "template.xml" manually.

To add a new template, make sure that tracking "Objects" is switched on in the "Track" panel. Put the object you want to track on the surface; you should see a binary image in the "Tracked" image panel (the big one on the right). Draw a rectangle with a mouse drag from the upper-left corner to the bottom-right corner, surrounding the object (the red rectangle in the image) - stay as close to the object as possible. Then adjust the "Template Area" panel's minimum area and maximum area sliders to select the maximum and minimum variation of the contour (blue and green rectangles respectively). Then press "Enter" to add the template. Check the video for more details.

If you want to change the ID of a template while CCV is running: first add the template using the instructions above, Save Templates, open the templates.xml file, change the ID to the number you want and set TRUEID to 1 for that template, and then Load Templates.
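If you prefer to script the TRUEID/ID change instead of editing the file by hand, a small sketch like the one below would do it. This is only an illustration, assuming Python is available and that the file contains a single TEMPLATE element like the example above; CCV itself does not ship such a script.

    # Hypothetical helper - pin a saved template to a fixed ID by editing template.xml.
    # Assumes the file holds a single TEMPLATE element as in the example above.
    import xml.etree.ElementTree as ET

    tree = ET.parse("template.xml")
    template = tree.getroot()              # the TEMPLATE element itself

    template.find("TRUEID").text = "1"     # 1 = use the user-assigned ID below
    template.find("ID").text = "189"       # the ID this template should report

    tree.write("template.xml")

Load Templates in CCV afterwards to pick up the edited file.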
Make sure that your ID follows the ID Assignment Rules below; this allows a mixture of fiducials/fingers/objects to be tracked, with an ID range for each group.

ID Assignment Rules

Code Changes
- Moved to an OpenFrameworks 0.61 precompiled library instead of building from source.
- A new module in the source code named "Templates" in ofxNCore.
- A very simple tracking process for now, which will be changed in the next releases.
- Modularized tracking of fingers/objects/fiducials.
- Calculation time in finger tracking minimized by removing extra calculations.
- Camera/Video toggle - one button now controls this instead of two. And yes, the crash when switching between camera and video multiple times is fixed.
- Template Area Panel - its use is described in the object tracking steps above. This may change according to the algorithm used.
- Track Panel - you now have a panel to choose what you want to track: fingers, objects or fiducials. Fiducial tracking is not yet integrated.
- Save/Load Templates - two more buttons in the Settings panel. As the names say, SAVE TEMPLATES saves the stored template data and LOAD TEMPLATES loads the templates. The file used is "templates.xml".

Coming Soon
- Fiducial integration.
- Better tracking algorithm (the current one is very crude).
- CCV debug logging mode (logging is currently disabled).

Join the Discussion  |   Getting Started  |   Download Now  |   Get the Source

CCA - Community Core Audio Preview Release (6 years, 2 months ago)

Introduction

I am proud to announce the first release of CCA, which I have been working on with my mentor Mathieu Virbel. Community Core Audio, or CCA, is a GSoC 2010 project that shares a similar GUI and underlying code base with CCV. The goal of CCA is to manage voice input, convert voice to text, and output the resulting messages to the network. A preview release of CCA (for Windows) is available for download here. We hope to get feedback from the community on this preview and look forward to future results.

Getting Started

The current version supports only command-picking mode, so do not click the "FREE SPEAKING MODE" button in this preview, as it may cause the application to crash. For details on these two modes, please read: CCA Modes. The current version also supports only English digits because of the simple Sphinx resources it ships with.

1) Select the "RECORD SOUND" check box to start recording. The waveform will be shown dynamically in the viewer window.
2) Un-select the "RECORD SOUND" check box or click the "STOP" button to stop recording.
3) Select the "PLAY/PAUSE" check box to play, and unselect it to pause. Click the "STOP" button to stop playing.
4) After recording audio, click "SENT TO RECOGNIZE ENGINE" and the output viewer will display the sentence you just recorded.
5) You can click the "CLEAR SCREEN" button to clear the output viewer.

Configuration

For normal use you do not need to do any configuration; just download and run it. However, CCA provides some options through config files. The most important config file is $cca_path/data/config.xml. If you want to use new Sphinx resources, you must specify the path of the new resource files in this XML file. To learn about resource files, please read: Sphinx Resource Files. The input audio sample rate is also set in config.xml; it must be the same as the sample rate of the Acoustic Model (AM), which is part of the resource files. The file $cca_path/data/commandList.txt is used by command-picking mode - see this document: CCA Modes.

Technical Detail

We developed a standalone oF addon for speech recognition, ofxASR, which was released several weeks ago. ofxASR is the core engine of CCA and can be used in any oF application. It currently uses CMU Sphinx3 as its Automatic Speech Recognition (ASR) engine, but it is also designed to work with other ASR engines, such as Mac OS X Speech, since all engines share the same interface. You can get the source of ofxASR here. A class named ofRectPrint was also created to print lines of text in a rectangle with automatic scrolling and scroll up/down.

Coming Soon
- Ship better Sphinx resources that support arbitrary English words instead of just digits.
- The free-speaking mode.
- Output to network.
- OSX and Linux support.

Join the Discussion  |   Getting Started  |   Download Now  |   Get the Source
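To make the "same interface" idea concrete, here is an illustrative sketch in Python (ofxASR itself is a C++ oF addon, and these class and method names are made up for the example): the host application talks to one small interface, and the Sphinx3 backend - or any other engine - plugs in behind it.

    # Illustrative sketch of the shared-engine-interface design, not ofxASR's real API.
    from abc import ABC, abstractmethod

    class ASREngine(ABC):
        @abstractmethod
        def initialize(self, resource_path: str, sample_rate: int) -> None:
            """Load acoustic/language resources; sample_rate must match the AM."""
        @abstractmethod
        def recognize(self, audio: bytes) -> str:
            """Return the recognized sentence for a chunk of recorded audio."""

    class Sphinx3Engine(ASREngine):
        def initialize(self, resource_path, sample_rate):
            self.resource_path = resource_path
            self.sample_rate = sample_rate   # must equal the Acoustic Model's rate
            # ... load Sphinx3 resources here ...
        def recognize(self, audio):
            # ... run Sphinx3 decoding here ...
            return "one two three"           # placeholder result

    engine: ASREngine = Sphinx3Engine()
    engine.initialize("data/sphinx_resources", 16000)
    print(engine.recognize(b""))

Swapping in another recognizer (for example a Mac OS X Speech backend) would only mean adding another subclass; the rest of the application stays unchanged.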

Introducing Go - Search, TV and More… (6 years, 2 months ago)

We are pleased to announce the beta release of NUI Group Go, a community search portal that will help community members find things faster. Go has several key purposes, explained below:

1) Community search engine.
2) Media publishing system.
3) URL shortening service.

Search & Current

A fast search engine with relevant results and helpful auto-completion. There are even some tricks you can use to get results quicker; for example, if you use nuigroup.com/go/keywords you will automatically get results on page load. Also included in this release is the Current page, which is the best way to stay up to date with all community activity and discussions. It parses and displays feeds gathered from around the community and showcases active topic keywords to help give users search ideas. Shortcut: http://nuigo.com/

Publish & Television

Video is a great way to share a concept or vision, so we developed a lightweight video publishing and viewing system to help the community share videos they enjoy. Publishing a video is simple: click the ~, then click the Media link, and enter a video title and a valid YouTube ID (e.g. JJQcJBjObEc). Note: you must be logged in to publish. Shortcut: http://nuitv.com/

URL Shortening & Sharing

A problem we noticed in our community and on the web in general is lengthy URLs, so with this new shortening service you can shorten any URL with the click of a button. The result is a long URL becoming a short one like nuigc.com/a1. To shorten a URL, click the ~ and then click the URL link below the search bar. You will be prompted to enter a URL as well as an optional title, which will customize your shortened URL with the provided keywords. Once added, you will be able to share your new URL via Twitter, Facebook, etc. Shortcut: http://nuigc.com/

Themes

There are currently two available themes for Go: the "Lite" theme, which is built for optimized page loading and minimalism, and the "Light" theme, which is for rich content and interactivity. To choose a theme, simply click the ~ and select a theme, or use the links listed below:

White  |   Black  |   Image  |   Interactive  |   Video

Thanks & Credits

We also want to thank several projects that helped make this release possible: the Google Search/YouTube APIs, jQuery and PHP, as well as a big thanks to all the community members for contributing feedback and ideas on how to make Go better.

Join the Discussion  |   Search  |   Watch TV

CCV 1.4 - Object & Fiducial Tracking (6 years, 2 months ago)

The summer is now over and Google Summer of Code made it one of the most amazing summers I have had. There are many people I want to thank for the successful completion of my project, including Pawel Solyga (mentor), Christian Moore, Jimmy Hertz, Sharath Patali, Rogier Mars, Tobias Drewry and many other users who took time to test and give feedback on the test version of CCV-COT. Please share your own feedback as well. It is a great pleasure for me to announce that my work has been integrated into mainstream CCV development and ships with this release (1.4).

Note: there have been some major changes in this release, so please read this fully before trying CCV 1.4. Also, if you have not tried out CCV-COT, you should read that post first to fully understand the earlier changes.

Changelog (after CCV-COT)
- Updates to the UI.
- Complete fiducial support (integration of the ofxFiducialFinder addon by Alain Ramos).
- XML messages bug solved.
- Dual communication modes (XML + TUIO UDP - thanks MashineGun).
- CCV debug mode.
- Different filter chains and controls for fiducials and objects/fingers.
- Fiducial settings in the config.xml file.
- Object acceleration calculations included.
- Removal and inclusion of some keyboard controls.
- Closing the application from the "X" button and minimizing the application.
- Blob counters for fingers, fiducials and objects respectively.
- Sample AS3 app added to test the Custom Object Tracker (COT).

Fingers + Objects

You can follow the post regarding CCV-COT to get started. The blob counter in the information window shows the count of finger blobs, object blobs and fiducial blobs respectively.

Fingers + Fiducials

Start the application and configure it for fingers (adjust the filters and calibrate). Enable Fiducials mode in the "Track" panel, then press "i". This will take you to the Fiducial Control Mode; "i" is the filter toggle key. In the debug window you will see the "Filter" tag change from "Finger/Object" to "Fiducial" (see the screenshot below). Now you can adjust the filters so that you see clear fiducials in the binary image. Note: if Fiducial mode is not enabled in the "Track" panel, you will see either a blank image or a still image.

For a better understanding, think of it like this: the camera image is copied into two images, one of which is analyzed for fingers/objects and the other for fiducials. Initially (when the Filter is "Finger/Object") all the sliders and controls affect the image that is analyzed for fingers/objects; when you press "i", the sliders and controls affect the image that is analyzed for fiducials (see the sketch below). The blob counter in the information window shows the count of finger blobs, object blobs and fiducial blobs respectively.

Fingers + Objects + Fiducials

We initially disabled this mode because fiducial tracking solves the object tracking problem, but we decided to give users the freedom to choose what they want to track. In this mode objects and fiducials will most likely be confused (e.g. the image below), so you need to be quite lucky to get it working - a lot of adjustment will be required, but it is worth trying out.

We currently have source for Windows and Linux and are looking for an OSX developer, and we are already excited for next year's GSoC - can anyone say AR? We are also currently migrating the main CCV nuicode project so the SVN and URL paths properly reflect the project terms, so please pardon our dust; you can see a quick TODO here...
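To make the two-image idea above a bit more concrete, here is a rough sketch in Python/OpenCV (CCV itself is C++/openFrameworks, and the threshold values here are arbitrary): one camera frame, two independent copies, each run through its own filter settings before its own analysis step.

    # Rough sketch of the dual filter chain idea - not CCV code; values are arbitrary.
    import cv2

    FINGER_THRESHOLD = 40     # what the sliders control while Filter = "Finger/Object"
    FIDUCIAL_THRESHOLD = 90   # what the sliders control after pressing "i"

    capture = cv2.VideoCapture(0)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Copy 1: filtered and then analyzed for finger/object blobs.
        _, finger_img = cv2.threshold(gray, FINGER_THRESHOLD, 255, cv2.THRESH_BINARY)
        # Copy 2: filtered and then analyzed for fiducial markers.
        _, fiducial_img = cv2.threshold(gray, FIDUCIAL_THRESHOLD, 255, cv2.THRESH_BINARY)

        cv2.imshow("finger/object image", finger_img)
        cv2.imshow("fiducial image", fiducial_img)
        if cv2.waitKey(1) == 27:   # Esc quits
            break
    capture.release()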
Below is a great video by Rogier Mars showcasing the latest CCV, as well as an image from Jimmy of his DI tabletop. Join the Discussion  |   COT Preview  |   Download Now  |   Get the Source

Google Summer of Code 2011 (6 years, 2 months ago)

Join the Discussion  |   Projects Showcase  |   Getting started with GSoC...

Google Summer of Code 2011 - Projects (6 years, 2 months ago)

After another tough selection period, the NUI Group mentors have selected the 6 winning proposals for GSoC 2011. It was a very competitive and difficult process, as nearly 30 proposals were received, many of them very professional. Here are the proposals that were selected: http://nuigc.com/gsoc2011org

Congratulations to all the Google Summer of Code students who were accepted this year. We hope for another pleasant, successful and productive GSoC for our students and community, and wish our students good luck with their coding until the suggested "pencils down" date in August. Our sincere thanks also go to everyone who participated, and we hope that you will continue to develop your projects with equal support from the NUI Group community. Please check the GSoC 2011 page for information about the latest developments.

Google Summer of Code Homepage   |   Project Updates

CCV 1.4.1a - Multicamera Preview (6 years, 2 months ago)

Community Core Vision v1.4.1a

Community Core Vision (CCV) is a proven computer vision solution with the best community support group around. Thoroughly tested in both research and commercial environments, CCV is a great starting point for anyone to begin learning and implementing computer vision systems.

We are pleased to announce the first official release of CCV with multicamera support. A special thanks goes to community developer Anatoly (Anat) for his hard work, passion and contributions. This release is mainly for testing hardware capabilities. Users should be careful with the XML settings, as their camera may not support specific tags. Several sample configurations are included with the download to get started. A modern (i5+) computer is recommended, especially when processing high resolution/framerate video. We have also created a short feedback survey to get a better understanding of the community's experience with these new features.

The latest release, which can be downloaded here, includes the following features (a rough illustration of the stitching idea is sketched below):
- True multicamera support with an enhanced stitching algorithm.
- Fully multithreaded capturing from cameras.
- Support for interleave mode in stitching/blending.
- Video recorder to capture from cameras, and a playback emulation mode.
- Video mashup - different device types can be used simultaneously.
- DirectShow, Firefly, Kinect and PS3 Eye support.
- New per-camera calibration process.

Changes
- GPU mode was removed (expected to return in 1.5).
- Simple video playback removed (refactored in 1.5).
- Camera settings dialog was removed - a restriction caused by the new camera selection logic.

Upcoming
- A multi-camera positioning and settings tool integrated into CCV.
- Improved performance through GPU acceleration and SSE instructions.
- Update to the latest OpenCV.

Expect more to come as the community continues to contribute to this pioneering open source project during Google Summer of Code 2011 and beyond.

Download CCV  |   Feedback Survey  |   Get the Source  |   Join the Discussion
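As a rough illustration of what multicamera stitching amounts to, the sketch below (Python/OpenCV rather than CCV's internals; the device indices are assumptions) grabs one frame per camera and composes them side by side into the single wide image that the tracker then treats as one surface.

    # Naive side-by-side stitching sketch - not CCV's algorithm; device indices assumed.
    import cv2
    import numpy as np

    cameras = [cv2.VideoCapture(0), cv2.VideoCapture(1)]

    frames = []
    for cam in cameras:
        ok, frame = cam.read()
        if ok:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        cam.release()

    # Assumes both cameras deliver frames of the same height.
    if len(frames) == len(cameras):
        stitched = np.hstack(frames)
        cv2.imwrite("stitched.png", stitched)

CCV's actual stitcher adds calibration, blending and the interleave mode on top of this basic composition.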

CCV 1.5 Official Release (6 years, 2 months ago)

We are proud to release the latest version of CCV. Our goals with this release are to offer stable multi-camera support and to improve code quality and performance. These updates are a result of our Google Summer of Code involvement this year. A big thanks goes out to my student Yishi Gou, who developed a new GUI-based grid system that makes camera layout much easier with up to 16 cameras. Beyond the new GUI views, we worked very hard on testing a wide range of camera types (CMU, DirectShow, PS3, Kinect, Firewire, etc.) and adding more robust settings. Below you can find more demo videos and the installer for this release; as always, please share your own feedback and any feature requests.

This version contains many changes in both features and the codebase:

New Features
- Automatic camera detection.
- New camera layout editor with drag/drop GUI and device list.
- Per-camera calibration, preview mode and settings dialogs.
- Optimized tracking and stitching algorithms.
- Interleave mode for stitching/blending.
- Optimized fiducial tracking.
- Migrated to TUIO 1.1 for blob support.
- Fully threaded capture and stitching.
- Different camera types can be used simultaneously.
- Dynamic threshold option (gives better tracking results in certain lighting scenarios - see the sketch below).
- Updated camera support (CMU, DirectShow, Firefly, Kinect and PS3 Eye).

Changes
- Added ofxAddons for CMU, DirectShow, FFMW and Multiplexer.
- Complete abstraction of camera devices with the new CameraBase class.
- Updates to the Multiplexer integrating the new camera layout tool.
- Rewrote parts of the PS3 wrapper.

I am currently traveling for the GSoC Mentor Summit in California and will return in one week to finalize the code and publish the updates to the project repository. I have also been working on CCM - a management utility that allows CCV to run as a service on Windows machines. This enables full interaction between CCV and any application which supports WM_TOUCH, including the Microsoft Surface SDK 2.0, with fiducial mapping to MS Surface tags.

We really appreciate all the feedback we have received from community members on CCV 1.4.1; without your participation and testing we could not have made these improvements. Thanks also to community members Mathias Griffe and Sam & Arron at Touchmi for making the awesome videos showcasing the new features in this release.

Join the Discussion  |   Download Installer  |   Get the Source
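The release notes do not spell out the dynamic threshold algorithm, so purely as an illustration of why a per-region threshold copes better with uneven lighting than one global value, here is a generic adaptive thresholding example in Python/OpenCV ("frame.png" is a placeholder input):

    # Generic adaptive thresholding illustration - not CCV's actual implementation.
    import cv2

    gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

    # Global threshold: one value for the whole image, so dim corners drop out.
    _, global_bin = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY)

    # Adaptive threshold: each pixel is compared against the mean of its own
    # 31x31 neighbourhood minus a small offset, tolerating lighting gradients.
    adaptive_bin = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                         cv2.THRESH_BINARY, 31, 5)

    cv2.imwrite("global.png", global_bin)
    cv2.imwrite("adaptive.png", adaptive_bin)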

Highly Deformable Mobile Devices with Physical Controls (6 years, 2 months ago)

Paddle

Overview: Touch screens have been widely adopted in mobile devices. Although touch input is very flexible in that it can be used for a wide variety of applications on mobile devices, touch screens do not provide physical affordances, encourage eyes-free use, or utilize the full dexterity of our hands, due to the lack of physical controls. On the other hand, physical controls are often tailored to the task at hand, making them less flexible and therefore less suitable for general-purpose use in mobile settings. In this paper, we show how to combine the flexibility of touch screens with the physical qualities that real-world controls provide in a mobile context. We do so using a deformable device that can be transformed into various special-purpose physical controls.

References: Raf Ramakers, Johannes Schöning, Kris Luyten. Paddle: Highly Deformable Mobile Devices with Physical Controls. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14) (to appear).

Learn More...