Unless artificial consciousness can be proven formally, judgments of the success of any implementation will depend on observation.

The Turing test is a proposal for identifying machine intelligence through a machine's ability to converse with a person: a human judge interacts with unseen interlocutors and must guess whether each is a machine or a human. An artificially conscious entity could pass an equivalent test only when it had moved beyond the imaginations of its observers and entered into a meaningful relationship with them, and perhaps with fellow instances of itself.
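
To make the protocol concrete, here is a minimal sketch of the imitation game in Python. The run_imitation_game function and the judge, human_reply, and machine_reply callables are hypothetical placeholders invented for illustration, not part of any standard framework; the sketch captures only the structure of the test, in which a judge sees anonymised text and must name the machine.

```python
import random

def run_imitation_game(judge, human_reply, machine_reply, questions):
    """Return True if the judge fails to identify the machine."""
    # Hide the two respondents behind anonymous labels, in random order.
    labels = ["A", "B"]
    random.shuffle(labels)
    respondents = {labels[0]: human_reply, labels[1]: machine_reply}
    machine_label = labels[1]

    # The judge sees only labelled question/answer text.
    transcript = [(label, q, reply(q))
                  for q in questions
                  for label, reply in respondents.items()]

    guess = judge(transcript)      # the judge names "A" or "B" as the machine
    return guess != machine_label  # True: the machine passed this round

# Toy usage: identical canned replies and a judge who guesses at random.
human = lambda q: "I would have to think about that."
machine = lambda q: "I would have to think about that."
judge = lambda transcript: random.choice(["A", "B"])
print(run_imitation_game(judge, human, machine, ["Tell me a joke."]))
```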

A cat or dog would not be able to pass this test, yet it is highly likely that consciousness is not an exclusive property of humans; it is therefore likely that a machine could be conscious without being able to pass the Turing test. Conversely, as mentioned above, the Chinese room argument attempts to undermine the validity of the Turing test by showing that a machine could pass it and yet not be conscious.

Since humans exhibit an enormous range of behaviours, all of which are deemed to reflect consciousness, it is difficult to lay down exhaustive criteria for determining whether a machine manifests consciousness.

Indeed, for those who argue for indirect perception, no behavioural test can prove or disprove the existence of consciousness, because a conscious entity may have dreams and other features of an inner life that behaviour never reveals. This point is made forcibly by those who stress the subjective nature of conscious experience, such as Thomas Nagel, who argues in his essay "What Is It Like to Be a Bat?" that subjective experience cannot be reduced because it cannot be objectively observed, though this does not place it in contradiction with physicalism.

Although objective criteria have been proposed as prerequisites for testing the consciousness of a machine, the failure of any particular test would not disprove consciousness. Ultimately it will be possible to assess whether a machine is conscious only when a universally accepted understanding of consciousness is available.

In the opinion of some, another test of AC should include a demonstration that a machine can learn to filter out certain stimuli in its environment, to focus on particular stimuli, and to show attention toward its environment in general. The mechanisms that govern how human attention is driven are not yet fully understood by scientists. This absence of knowledge could be exploited by engineers of AC: since we do not understand attentiveness in humans, we have no specific, established criteria by which to measure it in machines. Since unconsciousness in humans equates to total inattentiveness, an AC should have outputs that indicate where its attention is focused at any one time, at least during the aforementioned test.

Antonio Chella of the University of Palermo describes one such attention mechanism: "The mapping between the conceptual and the linguistic areas gives the interpretation of linguistic symbols in terms of conceptual structures. It is achieved through a focus of attention mechanism implemented by means of suitable recurrent neural networks with internal states. A sequential attentive mechanism is hypothesized that suitably scans the conceptual representation and, according to the hypotheses generated on the basis of previous knowledge, it predicts and detects the interesting events occurring in the scene. Hence, starting from the incoming information, such a mechanism generates expectations and it makes contexts in which hypotheses may be verified and, if necessary, adjusted."
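
As an illustration of the kind of mechanism the quotation describes, here is a minimal sketch of a recurrent attention gate in Python. The AttentionFilter class, its random weight matrices, and its softmax gating are assumptions invented for this sketch, not Chella's actual model: an internal state is updated from each incoming frame of stimuli, attention weights computed from that state attenuate (filter out) low-priority stimuli, and the index of the attended stimulus is exposed as an observable output, as the test above requires.

```python
import numpy as np

rng = np.random.default_rng(0)

class AttentionFilter:
    """Recurrent attention gate with an internal state.

    Illustrative sketch only: the weight matrices are random and
    untrained, standing in for the "suitable recurrent neural
    networks" of the quotation.
    """

    def __init__(self, n_stimuli, state_dim=16):
        self.W_in = rng.normal(0.0, 0.1, (state_dim, n_stimuli))
        self.W_rec = rng.normal(0.0, 0.1, (state_dim, state_dim))
        self.W_att = rng.normal(0.0, 0.1, (n_stimuli, state_dim))
        self.state = np.zeros(state_dim)

    def step(self, stimuli):
        """Consume one frame of stimuli; return (focus, weights, gated)."""
        # Recurrent update: the internal state carries expectations
        # formed from previous frames into the current one.
        self.state = np.tanh(self.W_in @ stimuli + self.W_rec @ self.state)
        # Softmax attention weights over the current stimuli.
        scores = self.W_att @ self.state
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        # Gating: low-weight stimuli are attenuated, i.e. filtered out.
        gated = weights * stimuli
        # The argmax is the externally reported focus of attention --
        # the observable output the test described above asks for.
        return int(weights.argmax()), weights, gated

# Toy usage: feed random frames and log where attention goes.
att = AttentionFilter(n_stimuli=5)
for t in range(3):
    frame = rng.normal(size=5)
    focus, weights, _ = att.step(frame)
    print(f"t={t}: attending to stimulus {focus}, weights={np.round(weights, 2)}")
```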