Great Client Service Depends On Clarity And Trust, Rather Than Being Constantly Available

For those lawyers working remotely during the pandemic, the interruptions of the office have been replaced by those at home: kids, dogs, and flushing toilets.

Along with these distractions, many of us are feeling an increased temptation to be “constantly available” to our clients. There are no office hours. Our workstations are laptops and mobile devices. Our workdays theoretically never end.

But good lawyers — whether working from home or elsewhere — do not have to be constantly available to represent their clients well and keep them informed. Here’s why …

Hey: My Review Of Basecamp’s New Email Platform

I’m finishing my two-week trial of Basecamp’s new email platform Hey.

I’ve used Basecamp with my legal team for several years and it works great for internal communication, project management, and other collaborative aspects of our work. So of course I needed to try Hey.

So how has Hey worked out?

Independence Day 2020 — Why This Year Is Different

I’m reflecting on this Fourth of July because this one seems so different from any other I can remember.

Friends and family will not gather.

Parades are cancelled.

This three-day weekend is a cruel irony for the millions rendered unemployed by the coronavirus. Nor will it be fully enjoyed by many who still have jobs: those working from home with young children tugging at their attention, and those on the front lines of our supply chain, worrying about their health and knowing their economic fate may not be so secure.

Factionalism is strong. Social cohesion is weak. We perceive domestic enemies everywhere.

The virus overlays it all. It has revealed our vulnerabilities on many levels — physically, societally, even spiritually.

Then there’s the Black Lives Matter movement and our era’s renewed call for social justice. People are reassessing our history, learning that the dominant narrative is not shared by all. The founders declared American “independence,” though freedom for so many would remain elusive for so long, and in some ways it still does.

We have reconsidered our monuments. Some statues are simply relics of the worst days of racism. They should go. Yet other sculptures — such as the Freedman’s Memorial — reflect a more nuanced story and perhaps ought to be preserved.

This day is a complicated monument of sorts too. Independence Day commemorates our principled break from tyranny, the courage and genius of our founders, and basic national pride.

But we must also remember how we’ve fallen short of our ideals. So, we should dedicate some effort this day — especially this year — toward revitalizing our struggle toward a more perfect union.

Have a good Fourth of July.

Facial Recognition Software and Social Media

The taboo on facial recognition technology is eroding. And a new app called Clearview is increasingly being used by law enforcement and garnering attention. The New York Times reports:

Police departments have had access to facial recognition tools for almost 20 years, but they have historically been limited to searching government-provided images, such as mug shots and driver’s license photos. In recent years, facial recognition algorithms have improved in accuracy, and companies like Amazon offer products that can create a facial recognition program for any database of images.

Mr. Ton-That wanted to go way beyond that. He began in 2016 by recruiting a couple of engineers. One helped design a program that can automatically collect images of people’s faces from across the internet, such as employment sites, news sites, educational sites, and social networks including Facebook, YouTube, Twitter, Instagram and even Venmo. Representatives of those companies said their policies prohibit such scraping, and Twitter said it explicitly banned use of its data for facial recognition.

Some commentators argue that this crosses an ethical line and that we should now ban facial recognition technology altogether.

Of course, the software would never work without the de facto cooperation of social media companies and all of us who freely share our images with them. This particular software apparently works so well because it has a much larger database of images as compared to the FBI and other law enforcement agencies.

There’s an irony in law enforcement solving crimes using what is, arguably, the stolen property of people who intended to share an image only with their friends.

Update: Facebook’s own facial recognition technology remains in the courts. Via The Hill:

The Supreme Court on Tuesday declined to take up a high-profile court battle over whether users can sue Facebook for using facial recognition technology on their photos without proper consent.

The high court rejected Facebook's bid to review the case, meaning the social media giant will likely have to face the multibillion-dollar class-action lawsuit over whether it violated an Illinois privacy law.

The case, Facebook vs. Patel, hinges on a question over whether Facebook violated Illinois law when it implemented a photo-tagging feature that recognized users' faces and suggested their names without obtaining adequate consent. Facebook argued to the Supreme Court that the class-action case should not be allowed to proceed because the group of users have not proven that the alleged privacy violation resulted in "real-world harm."

Further Update: The New York Times reports that Facebook agreed to pay $550 million to settle the facial recognition suit.

Smart Ovens Unpredictably Turning On

It used to be that the power would go out and we’d wake up to blinking LED displays. Now, apparently, we need to worry about appliances accidentally turning on during the night.

Ashley Carman, reporting for the Verge:

At least three smart June Ovens have turned on in the middle of the night and heated up to 400 degrees Fahrenheit or higher. The ovens’ owners aren’t sure why this happened, and June tells The Verge that user error is at fault. The company is planning an update that’ll hopefully remedy the situation and prevent it from happening again, but that change isn’t coming until next month.

One owner’s oven turned on around 2:30AM and broiled at 400 degrees Fahrenheit for hours while he slept, and he only noticed when he woke up four hours later. Nest cam footage captured the exact moment it turned on: the oven illuminates his dark, empty kitchen in a truly Black Mirror-like recording. This owner says his wife baked a pie around 11:30PM the night of the preheating incident, but she turned the oven off once she took the pie out.

June compares the “user error” to “butt dialing,” explaining that owners accidentally turned the ovens on with the app. But the company seems to acknowledge that the design of the oven and the app need to be improved to prevent these kinds of mistakes.

The article mentions that “unattended cooking” accounts for thirty-three percent of home fires. Smoking and candles are also up there. My law partner and I once tried a case where the house burned down and killed the elderly gentleman living there. Our forensic analysis showed the stove had been left on. Most of us would probably have to admit we’ve left the stove or oven on a few times.

But accidentally and remotely turning on our appliances — especially ones that get really hot — is definitely a new thing.

(Via Kottke)

State Bar Opinions On Safeguarding Client Data

AI Falsely Accused Thousands Of Fraud?

Civil rights attorney Jennifer L. Lord represents clients in wrongful termination cases. In 2013, she noticed a common story emerging:

“We were experiencing this rush of people calling us and saying they were told they committed fraud,” Lord said. “We spoke to some of our colleagues who also practice civil rights and employment law and everyone was experiencing this.”

Lord and her team then discovered that in 2013 the Michigan Unemployment Insurance Agency had purchased an algorithmic decision-making system called MIDAS, and that when it did so, it also laid off its fraud detection unit.

“The computer was left to operate by itself,” Lord said. “The Michigan auditor general issued two different reports a year later and found the computer was wrong 93 percent of the time.”

The government accused about 40,000 people of fraud and collected close to $100 million by garnishing their wages and seizing their tax refunds. This led Lord and her team to file the class action lawsuit Bauserman v. Unemployment Insurance Agency (subscription required), and the Michigan Supreme Court recently ruled the case may proceed.

According to Lord, it took some digging just to learn that the decision was performed by an algorithm. Recognizing algorithmic — as opposed to human — decision-making has become an increasingly important skill, one of Yuval Noah Harari’s 21 Lessons for the 21st Century.

This also raises the question of whether machines can and should provide due process. In this case, fraud requires proving intent, which involves the subtle and inferential determination of what people were thinking under the circumstances. Are algorithms currently up to the task? How soon before they are?

It also raises the question of how algorithms should be used to make law enforcement decisions more generally. With facial recognition and surveillance data increasingly available, using algorithms to make decisions becomes more attractive. Just ask Chinese authorities.

Finally, this reminds me of the situation in Baltimore, where a cash-strapped municipality finds itself overwhelmed with technological advancement. For Baltimore, it was the inability to fight back against malware. Here, the state may have been tempted by what seemed like a good technological substitute for an expensive government function.

For now at least, significant government decisions still need to be made by humans. As algorithms are incorporated into government decisions, the process must be disclosed and made as transparent as possible. And critically, there must be a way to appeal the decision to a human.
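That design principle — a model may flag, but only a human may decide, and every decision must be appealable — can be sketched in a few lines of code. This is a hypothetical illustration, not a description of MIDAS or any real agency system; the names `Decision`, `is_accusation`, and `reviewer_7` are invented for the example.

```python
# A minimal sketch of a human-in-the-loop gate for algorithmic fraud
# flags: the model can flag a claim, but a flag only becomes an
# accusation after a named human reviewer confirms it, and every
# decision carries a rationale and an appeal trail.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Decision:
    claim_id: str
    flagged_by_model: bool
    confirmed_by: Optional[str] = None  # reviewer name, once a human confirms
    rationale: str = ""
    appeals: List[str] = field(default_factory=list)

    @property
    def is_accusation(self) -> bool:
        # A model flag alone never counts as an accusation.
        return self.flagged_by_model and self.confirmed_by is not None

    def appeal(self, note: str) -> None:
        # Appeals are recorded and routed to a human review queue.
        self.appeals.append(note)


d = Decision(claim_id="UI-1234", flagged_by_model=True,
             rationale="income reported in overlapping weeks")
assert not d.is_accusation       # the algorithm's flag, by itself, decides nothing

d.confirmed_by = "reviewer_7"    # a human reviews the claim and confirms
assert d.is_accusation

d.appeal("claimant disputes the overlapping-weeks reading")
assert len(d.appeals) == 1       # the appeal is preserved for human review
```

The point of the structure is that the transparency requirements live in the data itself: the rationale is recorded up front, and the system cannot represent an accusation without a human's name attached to it.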

Robot Floor Cleaners At Walmart

Walmart attributes part of its second-quarter results to investments in robot floor cleaners:

“Smart assistants have huge potential to make busy stores run more smoothly, so Walmart has been pioneering new technologies to minimize the time an associate spends on the more mundane and repetitive tasks like cleaning floors or checking inventory on a shelf,” said Elizabeth Walker, from Walmart corporate affairs.

Viewed most positively, this is perhaps another example of Multiplicity, with the division of labor between humans and machines based on what they do best.

New Efforts At Autonomous Vehicle Legislation

In 2017, the U.S. House of Representatives passed the SELF DRIVE Act governing autonomous vehicles, but it stalled in the Senate. Last year, lawmakers failed to pass a bill before the December recess.

Today, the House Energy and Commerce Committee and the Senate Commerce, Science, and Transportation Committee announced work on a “bipartisan and bicameral basis to develop a self-driving car bill.” They request input from automakers, safety groups, and other stakeholders before August 23. David Shepardson, reporting for Reuters:

“Right now various countries are exploring regulations that will shape the future of autonomous vehicles, and the U.S. risks losing its leadership in this life-saving, life-changing technology, so we urge Congress to move forward now, this year,” spokeswoman [for the Alliance of Automobile Manufacturers] Gloria Bergquist said.

Since the new bill is being written with input from both chambers, this version stands a better chance of avoiding a breakdown like last year’s.

How Self-Driving Cars Visualize The World

Chris Urmson is the Director of Self-Driving Cars at Google. In this TED talk, he demonstrates what the world looks like from the perspective of a self-driving car. My first reaction is to notice how complicated and dynamic even an ordinary driving scenario is.

Since watching this talk, I’ve become more conscious of how I perceive and process road conditions, both when driving and biking. I’ve noticed myself looking farther ahead and slowing down in ambiguous situations — where it’s unclear, based on the available data, what I should do.