Note for "Thesis - Behavior of Machine Learning Algorithms in Adversarial Environments.pdf"(1)

2016/2/4 16:55, posted in Adversary Learning

1.1 Motivation and Methodology

A learning approach is well-suited to scenarios where:

  1. The process is too complex for a human operator to design by hand
  2. The system must adapt dynamically as conditions change

An intelligent adversary can:

  • Alter his approach based on knowledge of the learner’s shortcomings
  • Mislead it by cleverly crafting data to corrupt or deceive the learning process

Potential dangers posed to a learning system:

  • An attacker can exploit the nature of a machine learning system to mis-train it and cause it to fail

The questions raised by the author:

  • What techniques can a patient adversary use to mis-train or evade a learning system?
  • How can system designers assess the vulnerability of their systems and vigilantly incorporate trustworthy learning methods?

An algorithm’s performance depends on:

  • The constraints placed on the adversary
  • The job the algorithm is tasked with performing

This raises two fundamental questions:

  • How can we evaluate a learner’s performance in an adversarial environment?
  • How do we design or select a learner whose performance in a particular environment will be satisfactory?

Example 1.1

How a spammer can corrupt the learning mechanism (a toy sketch follows the list):

  1. Use information about the victim’s email distribution to construct clever attack spam messages.
  2. These attack messages cause the spam filter to misclassify the user’s desired messages as spam.
  3. Eventually the filter becomes so unreliable that the user can no longer trust it.
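
To make this concrete, here is a minimal toy sketch of such a poisoning attack against a naive Bayes spam filter, in the spirit of the thesis's dictionary attack. The messages, labels, and use of scikit-learn are illustrative assumptions, not the thesis's actual experiment.

```python
# Toy sketch of a dictionary-style poisoning attack on a naive Bayes
# spam filter. Illustrative only; not the thesis's actual experiment.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

ham = ["quarterly report attached for the meeting",
       "lunch meeting moved to noon tomorrow"]
spam = ["cheap pills buy now limited offer",
        "winner claim your free prize now"]

def train(messages, labels):
    vec = CountVectorizer()
    return vec, MultinomialNB().fit(vec.fit_transform(messages), labels)

def is_spam(vec, clf, msg):
    return clf.predict(vec.transform([msg]))[0] == 1

target = "please review the attached quarterly report"

# Clean training: the legitimate target message is classified as ham.
vec, clf = train(ham + spam, [0, 0, 1, 1])
print(is_spam(vec, clf, target))   # False

# Attack: spam that also contains words drawn from the victim's
# legitimate email distribution ("report", "meeting", ...). Once these
# messages are labeled spam and used for retraining, those ham words
# themselves become evidence of spam.
attack = ["buy pills offer report meeting quarterly attached"] * 5
vec, clf = train(ham + spam + attack, [0, 0, 1, 1] + [1] * 5)
print(is_spam(vec, clf, target))   # True: ham is now filtered as spam
```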

Example 1.2

ANTIDOTE’s characteristics (a toy illustration follows the list):

  • Better resistance in a poisoned environment
  • But less effective in a non-poisoned environment
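
This is the classic robustness-versus-efficiency trade-off from robust statistics (ANTIDOTE replaces PCA with a robust subspace estimator). The sketch below illustrates the trade-off with the simplest robust estimator, the median; it is a stand-in for intuition, not ANTIDOTE's actual algorithm.

```python
# Median vs. mean under poisoning: a stand-in illustration for the
# robustness trade-off behind ANTIDOTE (not its actual estimator).
import numpy as np

rng = np.random.default_rng(0)
clean = rng.normal(loc=0.0, scale=1.0, size=1000)  # true location is 0

# Non-poisoned data: the mean is typically the more efficient
# (lower-variance) estimator of the true location.
print("clean mean error:   ", abs(clean.mean()))
print("clean median error: ", abs(np.median(clean)))

# Poisoned data: 5% adversarial points pushed far in one direction,
# like chaff injected to shift an anomaly detector's model.
poisoned = np.concatenate([clean, np.full(50, 50.0)])
print("poisoned mean error:   ", abs(poisoned.mean()))    # dragged far off
print("poisoned median error: ", abs(np.median(poisoned)))  # barely moves
```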

Example 1.3

Means of evading the filter (a naive-filter sketch follows the list):

  • obfuscating words indicative of spam into human-recognizable misspellings; e.g., “Viagra” to “V1@gra” or “Cialis” to “Gia|is”
  • using clever HTML to make the content difficult to parse
  • adding words or text from other sources unrelated to the spam
  • embedding images that contain the spam message
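
The first trick works because the misspelled token falls outside the filter's known vocabulary while staying readable to a human. A minimal sketch; this deliberately naive filter is an assumption for illustration, not any real spam filter:

```python
# Toy evasion-by-obfuscation example against an exact-token filter.
import re

SPAM_TOKENS = {"viagra", "cialis"}

def naive_filter_flags(message):
    tokens = re.findall(r"[a-z]+", message.lower())
    return any(t in SPAM_TOKENS for t in tokens)

print(naive_filter_flags("cheap viagra here"))  # True: exact token match
print(naive_filter_flags("cheap v1@gra here"))  # False: "v1@gra" breaks
                                                # tokenization, yet a human
                                                # still reads "viagra"
```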

1.2 Guidelines from Computer Security

Author’s principles:

  • Proactive Analysis
  • Kerckhoffs’ Principle
  • Conservative Design
  • Threat Modeling

Proactive Analysis:

Proactively find the vulnerabilities of a learning system before it is deployed or widely used.

Kerckhoffs' Principle:

Do not let a system’s security rely on secrets. If the secrets are exposed, the system is immediately compromised.

Applying this principle to machine learning, we should assume the adversary is aware of the learning algorithm and can obtain some of the data used to train the model.

Conservative Design:

When assessing the security of a system, we should avoid placing limits on the adversary’s behavior; we should assume the adversary has the broadest possible powers.

Conversely, assuming an adversary that is too powerful may lead to an inappropriately pessimistic assessment of the system.

Threat Modeling:

A completely secure system is infeasible, so the author qualifies systems with a degree of security: the level of security expected against an adversary, based on a threat model with a certain set of:

  • objectives
  • capabilities
  • incentives

To construct a threat model for a particular learning system (a hypothetical data sketch follows the list):

  1. Quantify the security setting and objectives of the system, developing criteria to measure success and quantify the level of security offered.
  2. Formalize the risks and objectives to identify potential limitations of the system and potential attacks.
  3. Identify potential adversarial goals, resources, and limitations.
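
One way to make these steps actionable is to record the threat model as explicit data, so each assumption about the adversary can be reviewed and challenged. The structure below is hypothetical (the field names follow the list above; the example content paraphrases Example 1.1) and is not taken from the thesis:

```python
# Hypothetical record of a threat model's components; not from the thesis.
from dataclasses import dataclass, field

@dataclass
class ThreatModel:
    objectives: list[str]    # what the adversary wants to achieve
    capabilities: list[str]  # what the adversary can observe or control
    incentives: list[str]    # why the adversary would bother
    limitations: list[str] = field(default_factory=list)

# Example 1.1 recast as a threat model (illustrative content).
spam_poisoning = ThreatModel(
    objectives=["make the filter misclassify ham as spam"],
    capabilities=["send messages that enter the training set",
                  "knows the learning algorithm (Kerckhoffs' Principle)"],
    incentives=["degrade trust in the filter so spam gets through"],
    limitations=["cannot alter labels assigned by the user"],
)
```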

To evaluate a system:

  1. Determine classes of attacks on the system.
  2. Evaluate the resilience of the system against those attacks.
  3. Strengthen the system against those classes of attacks.

1.3 Historical Roadmap

Some of the author’s experiences while developing this thesis; they seem tangential to the main content.

1.4 Dissertation Organization

As the title suggests: just the chapter organization, no further useful information.