Henry S. Baird: Research on Human Interactive Proofs and CAPTCHAs --- a joint project with Avaya Labs Research
Networked computers are vulnerable to cyber attacks in which programs ('bots, spiders, scrapers, spammers, etc.) pose as human users in order to abuse services offered for individual human use. These abuses have included defrauding financial payment systems, spamming, stealing personal information, skewing rankings and recommendations, and ticket scalping; new ones arise every month.
Efforts to defend against such attacks have, over the last six years, stimulated investigations into a broad family of security protocols, called CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart), designed to distinguish between human and machine users automatically over networks and via GUIs. I have developed three generations of reading-based CAPTCHAs: PessimalPrint [CBF03], BaffleText [BC03], and ScatterType [BBW05].
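The challenge/response loop that all such tests share can be sketched as follows. This is a minimal toy illustration, not the code of any of the systems named above; the class and method names are invented, and the "distortion" is a trivial placeholder for what a real system would render as a degraded text image.

```python
import secrets

class CaptchaServer:
    """Toy sketch of a CAPTCHA server: issue a challenge whose answer is
    easy for a human reader but (ideally) hard for a program, then
    check the response.  Illustrative only."""

    def __init__(self):
        self._pending = {}  # challenge id -> expected answer

    def issue_challenge(self, word):
        """Register a challenge and return (id, rendered challenge).

        A real system would render `word` as a degraded image; here we
        just reverse the string as a stand-in for that distortion."""
        cid = secrets.token_hex(8)
        self._pending[cid] = word
        return cid, word[::-1]  # placeholder "distortion"

    def verify(self, cid, response):
        """Allow one attempt per challenge: pop it so replays fail."""
        return self._pending.pop(cid, None) == response

server = CaptchaServer()
cid, challenge = server.issue_challenge("lehigh")
print(challenge)                      # the (toy-)distorted challenge
print(server.verify(cid, "lehigh"))   # True: correct answer
print(server.verify(cid, "lehigh"))   # False: challenge already consumed
```

Popping the stored answer on the first verification attempt is the detail that makes the protocol a one-shot test rather than an oracle an attacker could query repeatedly.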
A somewhat broader research area, 'human interactive proofs' (HIPs), may be defined as challenge/response protocols which allow a human to authenticate herself as a member of a given group. Prof. Dan Lopresti of Lehigh's CSE Dept has demonstrated a speech-based CAPTCHA using synthesized speech with confusing background noise [LKS02]. Prof. Lopresti and I co-organized the 2nd Int'l HIP Workshop, held at Lehigh University May 19-20, 2005.

Virtually all commercial uses of CAPTCHAs exploit the gap in reading ability between humans and machines when confronted with degraded images of text. AltaVista, Yahoo!, PayPal, Microsoft, TicketMaster, and MailBlocks are among at least three dozen companies presently employing CAPTCHAs. The arms race between defenders of Web services and attackers is heating up, and many CAPTCHAs have been broken by computer-vision attacks. In our Pattern Recognition research lab, Prof. Lopresti and I are investigating a new generation of CAPTCHAs designed to resist such attacks.
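One design direction in this new generation, exemplified by ScatterType's "legible but hard-to-segment" images [BBW05], is to cut each glyph into fragments and drift the pieces apart, defeating the character-segmentation step that machine vision relies on while humans can still gestalt-read the text. The following is my own toy sketch of that idea on an ASCII "bitmap," not the authors' implementation:

```python
import random

# A 5x5 toy glyph: '#' is ink, '.' is background.
GLYPH_E = [
    "#####",
    "#....",
    "####.",
    "#....",
    "#####",
]

def scatter(glyph, max_shift=2, seed=0):
    """Cut the glyph into left/right halves and shift each half
    vertically by a random amount, padding with background.  No ink is
    lost, but the glyph's vertical alignment -- which segmenters
    exploit -- is destroyed."""
    rng = random.Random(seed)
    mid = len(glyph[0]) // 2
    halves = [[row[:mid] for row in glyph], [row[mid:] for row in glyph]]
    height = len(glyph) + max_shift
    out = [""] * height
    for half in halves:
        shift = rng.randint(0, max_shift)
        width = len(half[0])
        col = ["." * width] * shift + half          # pad above
        col += ["." * width] * (height - len(col))  # pad below
        out = [o + c for o, c in zip(out, col)]
    return out

for row in scatter(GLYPH_E):
    print(row)
```

Because the fragments are merely displaced, not erased, a human reader recovers the letter; an attacker must first solve the much harder problem of deciding which fragments belong together.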
References:

[BBW05] H. S. Baird, M. A. Moll, and S.-Y. Wang, "ScatterType: A Legible but Hard-to-Segment CAPTCHA," Proc., IAPR 8th Int'l Conf. on Document Analysis and Recognition, Seoul, Korea, August 29 - September 1, 2005.

[BB05] H. S. Baird and J. L. Bentley, "Implicit CAPTCHAs," Proc., SPIE/IS&T Conf. on Document Recognition and Retrieval XII (DR&R2005), San Jose, CA, January 2005.

[CBF03] A. L. Coates, H. S. Baird, and R. J. Fateman, "PessimalPrint: A Reverse Turing Test," Int'l J. on Document Analysis & Recognition, Vol. 5, pp. 158-163, 2003.

[BC03] H. S. Baird and M. Chew, "BaffleText: A Human Interactive Proof," Proc., IS&T/SPIE Document Recognition & Retrieval X Conf. (DR&R2003), Santa Clara, CA, January 23-24, 2003.

[LKS02] D. Lopresti, G. Kochanski, and C. Shih, "Human Interactive Proofs for Spoken Language Interfaces," Proc., 1st Workshop on Human Interactive Proofs, Palo Alto, CA, pp. 30-34, January 2002.