FastPass is a research project that developed a modular reference system for Automated Border Crossing (ABC), funded by the European Commission under its Seventh Framework Programme. Led by AIT, the consortium assembled key players in ABC, including component manufacturers, research organizations and governmental authorities. The project's goal was to harmonize solutions towards faster, smoother and more convenient border crossing for passengers at air, sea and land border crossing points, while considering legal and ethical requirements.
FastPass developed a new form of ABC gate for the air border scenario, which I led as scenario speaker: time-consuming document checks were shifted from the border gate to self-service kiosks, using biometrics as a token for fast checks at the gate. Further technological innovations, resulting in 43 research papers, include at-a-distance biometrics, counter-spoofing, harmonized passport scanning, queue-length estimation and anti-tailgating technology. The land and sea border scenarios adapted the two-step token process and the technology modules, for example by making the e-Gate portable or by integrating the kiosk into robotic terminals that automatically approach a car's window. As a member of the University of Reading research team, I also co-managed the traveler identification and monitoring work package, tasked with creating the iris, face and fingerprint biometric modules, integrating anti-spoofing, and developing surveillance components.
Modentity is a security research project supporting identity and document checks with a smartphone (and extending its functionality, e.g. by developing external illumination). The consortium, led by AIT, comprised a security document printer, an IT company, a government agency and a social research organization.
As a result, Modentity delivered a fully functional and integrated smartphone prototype application enabling a border guard or executive officer to conduct document verification through automated MRZ reading and chip readout for ICAO travel documents, to capture and compare biometric data (face and fingerprint) using the embedded smartphone camera, and to initiate security-related checks. As the project's biometrics expert, I contributed by developing biometric segmentation and analysis, and co-organised verification and validation, building up an internal dataset. Results were published at international conferences, and as a by-product, new insights into biometric comparisons between smartphone and traditional sensors were gained.
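To illustrate one building block of such document verification: each MRZ field of an ICAO travel document carries a check digit computed with a repeating 7-3-1 weighting, as specified in ICAO Doc 9303. The sketch below shows that standard algorithm only; it is not the project's actual code.

```python
def mrz_check_digit(field: str) -> int:
    """ICAO 9303 check digit: weights 7, 3, 1 repeating over the field.

    Digits keep their value, letters map to A=10 .. Z=35,
    and the filler character '<' counts as 0.
    """
    def value(ch: str) -> int:
        if ch.isdigit():
            return int(ch)
        if ch == "<":
            return 0
        return ord(ch) - ord("A") + 10

    weights = (7, 3, 1)
    return sum(value(c) * weights[i % 3] for i, c in enumerate(field)) % 10

# Specimen document number from ICAO Doc 9303 sample data:
print(mrz_check_digit("L898902C3"))  # → 6
```

During MRZ reading, an application would recompute this digit for each field and compare it against the digit printed in the zone to detect OCR errors or tampering.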
BioSurveillance, my dissertation project, examines the multibiometric combination of iris and face recognition technology in unconstrained environments using a single camera sensor. The project was funded by FIT-IT (project no. 819382).
The development of new iris recognition algorithms, especially for unconstrained environments, has been an active research topic with a broad range of civilian and governmental applications. Iris-based systems are among the most accurate solutions supporting large-scale identification, but traditionally required cooperative acquisition. In the project I developed next-generation iris segmentation techniques for semi-constrained capture (e.g. using a security camera at an ATM), techniques for fast and accurate classifier fusion to detect face parts, and new fusion techniques combining iris and face recognition algorithms. The challenge was to provide competitive results even for visual images with significant quality degradation. I was involved in both the acquisition and execution of the project as its prime member, under the supervision of my thesis advisor Prof. Uhl.
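A common baseline for combining two modalities is score-level fusion: each matcher's raw comparison score is normalized to a common range and the results are combined as a weighted sum. The sketch below is a generic illustration with hypothetical weights and score ranges, not the specific fusion technique developed in the project.

```python
def minmax_normalize(score: float, lo: float, hi: float) -> float:
    """Map a raw comparison score into [0, 1] given the matcher's score range."""
    return (score - lo) / (hi - lo)

def weighted_sum_fusion(iris_score: float, face_score: float,
                        w_iris: float = 0.6, w_face: float = 0.4) -> float:
    """Fuse two normalized similarity scores; the weights are hypothetical."""
    return w_iris * iris_score + w_face * face_score

# Hypothetical example: an iris matcher outputs distances in [0, 2]
# (lower = better), a face matcher similarities in [0, 100].
iris = 1.0 - minmax_normalize(0.4, 0.0, 2.0)   # distance -> similarity: 0.8
face = minmax_normalize(50.0, 0.0, 100.0)      # 0.5
fused = weighted_sum_fusion(iris, face)        # 0.6 * 0.8 + 0.4 * 0.5 = 0.68
```

In practice the weights would be trained on a development set, and more elaborate normalization (e.g. z-score or tanh) is often used when score distributions differ strongly between matchers.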
HandFootBio was my master’s thesis project, conducted under the supervision of Prof. Uhl. The task was to develop a single-sensor hand biometric system for authentication based on multiple modalities (hand geometry, fingerprint and palmprint) and to relate these techniques to a new biometric: the human foot.
For evaluation purposes, a custom database of 443 right-hand images and 160 right-foot samples was collected at the Department of Computer Sciences, University of Salzburg, Austria. Results indicated that in verification mode (1:1 comparison against a claimed identity), footprint-based multimodal recognition could achieve recognition rates on the order of unimodal hand-based systems (less than 0.5% minimum half total error rate). Fusing multiple hand-based modalities resulted in even lower error rates. While single-sensor hand-based solutions seem a highly efficient choice for traditional access control, foot biometrics provide a useful alternative for selected applications (e.g. newborn identification, or in spas and public baths, where footprints can be captured without the need to take off shoes or socks).
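For reference, the minimum half total error rate quoted above averages the false accept rate (impostor scores accepted) and the false reject rate (genuine scores rejected), minimized over all decision thresholds. A generic Python sketch of the metric, with hypothetical example scores:

```python
def half_total_error_rate(genuine, impostor, threshold):
    """HTER at a given threshold: mean of false accept and false reject rates.

    Scores are similarities: accept if score >= threshold.
    """
    far = sum(s >= threshold for s in impostor) / len(impostor)  # false accepts
    frr = sum(s < threshold for s in genuine) / len(genuine)     # false rejects
    return (far + frr) / 2

def min_hter(genuine, impostor):
    """Minimum HTER over all thresholds induced by the observed scores."""
    thresholds = sorted(set(genuine) | set(impostor))
    return min(half_total_error_rate(genuine, impostor, t) for t in thresholds)

# Hypothetical similarity scores with slight class overlap:
genuine = [0.6, 0.8]
impostor = [0.5, 0.7]
print(min_hter(genuine, impostor))  # → 0.25
```

On a real evaluation set the threshold sweep would run over thousands of comparison scores, and the operating threshold is typically fixed on a separate development set.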
Started as a project, eTrain is an online exercise pool created by Wildnet for Austrian pupils aged 11-14 to practise English grammar during their first four years of English as their primary foreign language. Collections of exercises have been made available since 1998, and work is still in progress to improve and implement additional modules for the eTrain development platform; a milestone was reached in 2007 with the complete reconstruction of the web application (at the time based on an Adobe Flash frontend, a Microsoft .NET backend and an MS SQL Server 2000 database).
As main software developer and co-founder of Wildnet, my role was to draft the graphical design and create the web application framework, including the exercise templates used by other Wildnet team members to define these fun-to-solve grammar exercises.
SoccerBot is a joint class project for the embedded software engineering and robotics classes I took in 2005 with my colleagues G. Klima and K. Szczurek.
While the main objective of the first project was to design and implement a semi-autonomous robot using the Lego Mindstorms toolkit, employing principles learned in the embedded software engineering class (e.g. exact specification of timing behavior using Giotto), the second project aimed to provide an object recognition framework for an existing robot (Emma2) developed at the Department of Computer Sciences, University of Salzburg, Austria.
SoccerBot is a semi-autonomous robot based on a Lego Mindstorms (RCX) chassis and a web camera as the main sensory system for navigation. Motivated by soccer-playing robots (RoboCup) we constructed a robot able to recognize and catch a ball distinguishable from the environment by its color and shape.