The question in the Lenz case turned on whether the requirement of a "good faith belief that use of the material... is not authorized by... the law" [17 U.S.C. § 512(c)(3)(A)(v)] obliges the potential complainant to consider fair use. The defendants argued that it does not. Judge Fogel disagreed, concluding:
The DMCA already requires copyright owners to make an initial review of the potentially infringing material prior to sending a takedown notice; indeed, it would be impossible to meet any of the requirements of Section 512(c) without doing so. A consideration of the applicability of the fair use doctrine simply is part of that initial review.

This is interesting in itself, particularly when you take the extra step and connect it to liability under Section 512(f), which Judge Fogel does (accepting the plaintiff's claim):
An allegation that a copyright owner acted in bad faith by issuing a takedown notice without proper consideration of the fair use doctrine thus is sufficient to state a misrepresentation claim pursuant to Section 512(f) of the DMCA.

But there's an additional implication. There are systems out there that promise to automate the identification of infringing content, and the delivery of takedown notices. But, as David Robinson of Freedom to Tinker notes, "it’s hard to imagine a computer performing the four-factor weighing test that informs a fair use determination." So, if this judgment stands, fully automated takedown notices may be illegitimate, and potentially vulnerable to counter-claims.
Since Bill C-61 proposes to establish a conceptually similar liability shield for ISPs in Canadian copyright law, one might be inclined to wonder whether the same argument would hold. The requirements for notices of infringement in this bill are as follows.
(2) A notice of claimed infringement shall be in writing in the form, if any, prescribed by regulation and shall
(a) state the claimant’s name and address and any other particulars prescribed by regulation that enable communication with the claimant;
(b) identify the work or other subject-matter to which the claimed infringement relates;
(c) state the claimant’s interest or right with respect to the copyright in the work or other subject-matter;
(d) specify the location data for the electronic location to which the claimed infringement relates;
(e) specify the infringement that is claimed;
(f) specify the date and time of the commission of the claimed infringement; and
(g) contain any other information that may be prescribed by regulation.

I'm no lawyer, but I don't see anything like a "good faith" requirement here. Such a requirement could be built in via regulation, but it is not explicitly present. One might imagine reading something into "the infringement that is claimed", but on its face the provision supplies no standard against which the claim can be judged. So, fully automated notice bots would seem to be permissible, provided they can satisfy the (unspecified) regulations. Certainly these would be capable of producing a claim of infringement in the required form, if it doesn't need to be accurate, or even reasonable.
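To make the point concrete, here is a minimal sketch of what a bot's notice record might look like. The field names and the completeness check are my own invention for illustration; the bill prescribes only the substance of each field, not any machine-readable format. The key observation is that formal compliance can be tested mechanically, while accuracy cannot.

```python
from dataclasses import dataclass, fields

# Hypothetical record covering the 41.25(2) fields (a)-(g).
# All names here are assumptions for illustration, not anything in the bill.
@dataclass
class InfringementNotice:
    claimant_name_and_address: str   # (a)
    work_identified: str             # (b)
    claimant_interest: str           # (c)
    location_data: str               # (d)
    infringement_claimed: str        # (e)
    date_and_time: str               # (f)
    other_prescribed_info: str = ""  # (g) depends on future regulation

def is_complete(notice: InfringementNotice) -> bool:
    """Formal completeness only: every mandatory field is non-empty.
    Nothing here tests whether the claim is accurate, or even plausible."""
    mandatory = (f.name for f in fields(notice)
                 if f.name != "other_prescribed_info")
    return all(getattr(notice, name).strip() for name in mandatory)
```

A bot can fill every field with boilerplate and pass `is_complete`; no field invites, let alone requires, a good-faith judgment.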
Michael Geist has noted that C-61 specifies no penalty for filing a false notice. One could argue that such a penalty is unnecessary, since the consequences of an abusive notice are much less severe than they would be under the DMCA. C-61 establishes a notice-and-notice system; the only requirements on the person receiving the notice are to pass it on to whoever owns the "electronic location" in question, and to "retain records that will allow the identity of the person to whom the electronic location belongs to be determined". If one ignores the privacy implications of the data retention requirement, it's not a very onerous obligation. Indeed, responding to those obligations could also be automated relatively simply.
So, we can imagine a scenario here where bots are trawling the net, looking for allegedly infringing content. When they find something, they generate a 41.25(2)-compliant notice and send it off to another bot, which passes it on down the line, until eventually it reaches a human who can decide whether or not to do anything about it. All of this seems to be permissible under the terms of C-61, and so far it doesn't even seem especially objectionable. (Processing illegitimate notices would have undesirable consequences similar to spam, in terms of the burden they potentially impose on the network, not to mention their nuisance value. But there is a provision to cover this by adding a processing fee via regulation.)
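The bot chain described above can be sketched in a few lines. Everything here is hypothetical: the function names are invented, and the point is only that both ends of a notice-and-notice exchange, generation and forwarding-plus-retention, are trivially automatable.

```python
# Hypothetical sketch of the notice chain: a crawler bot emits formally
# compliant notices with no accuracy check at all; an intermediary bot
# forwards each one and retains the identifying record, which are the only
# obligations a notice-and-notice recipient would carry.

def crawler_bot(suspect_urls):
    """Emit a notice for every flagged URL, merit unexamined."""
    for url in suspect_urls:
        yield {"location": url,
               "claim": "alleged infringement",
               "claimant": "NoticeBot, Inc."}

def intermediary_bot(notice, retention_log):
    """Retain the record, then pass the notice down the line
    toward an eventual human reader."""
    retention_log.append(notice["location"])
    return notice

log = []
pending_human_review = [intermediary_bot(n, log)
                        for n in crawler_bot(["http://example.org/a",
                                              "http://example.org/b"])]
```

Only the final step, deciding whether a notice actually identifies an infringement, requires a person; everything upstream runs unattended.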
Except that 41.27(2)(f) says the liability shield for search providers only applies if the provider "has not received a notice of claimed infringement relating to the work or other subject-matter that complies with subsection 41.25(2)." So here a hypothetical illegitimate automated notice has a very different consequence. Instead of simply being a piece of spam that can be filtered, or ignored, it is a targetable device that can be used to undo the liability shield offered by section 41.27(1). Of course this doesn't create any liability for the search provider; a potential complainant would still have to have a real claim in order to obtain any remedies. But if I were, say, Google, I wouldn't be at all happy about the prospect of being flooded with automated 41.25(2) notices, and having to identify the ones that were credible.