How Automation Methodologies and Best Practices Are Transforming the USPTO

While automation is not intelligence, the real value of automation, especially software automation, lies in taking the robot out of the human. The global pandemic has shown how valuable automation has become for businesses and governments in keeping operations running smoothly, avoiding bottlenecks, and taking repetitive tasks off the plates of human workers to free them for more valuable work.

However, to see the real value of automation, and in particular robotic process automation (RPA), it’s important to know what these bots can and can’t do, and how AI is applied to help manage more complex tasks. The USPTO uses automation and AI to improve operational efficiency and support its highly qualified examiner corps. In addition, the agency automates various processes to ease the manual burden on its examiners.

Timothy Goodwin, Deputy Director of the Office of Organizational Policy and Governance at the United States Patent and Trademark Office (USPTO), discusses how the agency is leveraging automation and cognitive technology at the United States’ innovation agency. Timothy will also be presenting at the upcoming ATARC CPMAI event, “Methodologies and Best Practices for a Successful RPA Implementation,” on July 21, 2021, from 2:00 pm to 3:00 pm, where he will explore some of the questions below.

How are you leveraging automation at the USPTO?

Timothy Goodwin: The depth and breadth of automation technologies used within the USPTO are vast. Automation is an essential catalyst for generating business value. Recently, we have used AI/ML to reduce the manual patent classification actions performed by an examiner; RPA to free up valuable time spent on suspension checks for trademark applications; and virtual data as a service (vDaaS) to raise the quality of applications under development through on-demand provisioning of test data. All of this has helped propel more and more automation capabilities and allows our agency to provide better-quality services to the public.
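
As a purely illustrative aside, a patent classification aid of the kind mentioned above can be sketched as a standard text classifier. The snippet below is a toy sketch in Python with scikit-learn, not the USPTO’s actual system; the training abstracts and CPC symbols are hypothetical.

```python
# Toy sketch of ML-assisted patent classification. The USPTO's real
# models are not public; all data and labels here are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: abstract text paired with a CPC symbol.
abstracts = [
    "A battery electrode with a lithium-ion conducting coating.",
    "A method for encrypting network traffic between servers.",
]
labels = ["H01M", "H04L"]  # example CPC symbols, for illustration only

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(abstracts, labels)

# Suggest a class for a new application, so the examiner reviews a
# recommendation instead of classifying entirely from scratch.
print(model.predict(["A cathode material comprising nickel and cobalt."])[0])
```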

How do you identify which problem(s) to start with for your automation and cognitive technology projects?

Timothy Goodwin: I’ll narrow this question and focus on RPA. When we launched our RPA program in 2019, we were looking for any USPTO process that could be used to demonstrate the capability. It started with a “first in, first out” model, where submitted requests helped only a single user or a small number of users. Since then, we have evolved our intake process to look at automation demand more broadly and identify critical issues impacting USPTO business lines. A recent example has been the development of RPA solutions to help reduce the backlog created by the high volume of trademark applications over the past twelve months.

How do you measure the ROI of these types of automation, advanced AI, and analytics projects?

Timothy Goodwin: Metrics are always based on the business value derived from demonstrated automation capabilities. This can take many different forms depending on the solution implemented. For cloud infrastructure provisioning, it can be something as simple as a routine that shuts down idle virtual services when not in use, avoiding unnecessary expense. For RPA, it could be the number of hours of productivity recovered across one or more automated process instances. The key metric still centers on the question: “How does this help us grant timely, high-quality patents and trademarks?”
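
To make the idle-resource example concrete, here is a minimal sketch of such a routine, assuming AWS EC2 and the boto3 SDK; the interview does not name the USPTO’s cloud platform, and the CPU threshold and lookback window are hypothetical.

```python
# Minimal sketch of a "shut down idle virtual services" routine,
# assuming AWS EC2 + boto3. Thresholds are illustrative, not USPTO policy.
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

IDLE_CPU_THRESHOLD = 2.0       # percent; hypothetical cutoff
LOOKBACK = timedelta(hours=6)  # hypothetical idle window

now = datetime.now(timezone.utc)
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        datapoints = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=now - LOOKBACK,
            EndTime=now,
            Period=3600,
            Statistics=["Average"],
        )["Datapoints"]
        # Stop (rather than terminate) instances that stayed idle for the
        # whole lookback window, avoiding charges for unused capacity.
        if datapoints and all(p["Average"] < IDLE_CPU_THRESHOLD for p in datapoints):
            ec2.stop_instances(InstanceIds=[instance_id])
```

The hours-recovered RPA metric mentioned above is then straightforward arithmetic: minutes saved per automated transaction multiplied by transaction volume.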

What are some of the unique opportunities the public sector has in data and AI?

Timothy Goodwin: In very general terms, the public sector is a steward of, and has access to, vast amounts of unique data that is inaccessible to any other entity in the world. This, of course, refers to the data in its totality, not merely the views available through open data platforms. There is immense potential in combining these unique data sets with AI to advance research in every discipline known today; the possibilities are simply unlimited. The challenges, on the other hand, are pervasive and transcend legal, technical, and ethical boundaries. However, I would reiterate our responsibility as custodians of data to ensure public trust. For me, this is the fundamental issue that must be addressed when determining how the data should be used. Ultimately, how data is used, and for what AI purposes, needs to be explicitly defined and vetted before any work is undertaken, to ensure that we exceed public expectations.

How do analytics, automation, and AI work together at the USPTO?

Timothy Goodwin: USPTO data is unique, and with that come unique challenges and opportunities. The three areas are naturally woven together and build on one another to enable advanced capabilities. Automations help feed our patent and trademark data lakes, where preparations are made to ensure data quality and security. This in turn feeds our AI/ML models, whose outputs are ultimately deployed to provide insights and data visualizations to larger groups. All of this helps create a sustainable environment for making data-driven decisions for the agency and ensuring that the USPTO can continuously deliver high-quality services.
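
One schematic way to picture that chain, as a hedged sketch only: bots land raw records, a quality gate prepares them, and a model stage produces the insights that feed dashboards. Every name and record below is hypothetical, not USPTO code.

```python
# Schematic of the automation -> data lake -> AI/ML -> insights chain.
# All functions, fields, and data are hypothetical placeholders.
import pandas as pd

def ingest() -> pd.DataFrame:
    # Stand-in for a bot landing raw filing records in the data lake.
    return pd.DataFrame({
        "application_id": ["A1", "A2", None],
        "abstract": ["battery electrode coating",
                     "network encryption method",
                     "record missing its id"],
    })

def prepare(raw: pd.DataFrame) -> pd.DataFrame:
    # Data-quality gate: reject records missing required fields.
    return raw.dropna(subset=["application_id"])

def score(clean: pd.DataFrame) -> pd.DataFrame:
    # Toy "model" stage: flag records for a reviewer queue.
    out = clean.copy()
    out["flagged"] = out["abstract"].str.contains("encryption")
    return out

insights = score(prepare(ingest()))
print(insights)  # downstream: dashboards and data visualizations
```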

What are you doing to develop an AI-ready workforce?

Timothy Goodwin: Developing a workforce skilled in advanced technologies is already a challenge for many federal organizations. At the USPTO, we are fortunate to have strong leadership in data science, analytics, and AI from Scott Beliveau and our new Director of Emerging Technologies, Jerry Ma. [For additional insights, Jerry Ma presented at a previous AI In Government event, and Scott Beliveau will be sharing insights at the October 2021 AI In Government event.] With the support of their teams, they are paving the way for other USPTO staff members, creating opportunities for innovation to be explored. Enabling targeted AI experimentation that delivers high business value is one of the best tools we have for growing our workforce. In a more practical sense, we have also upskilled our workforce through traditional training, enrolling many employees in AI/ML and advanced analytics courses at various levels. [The best-practices approach to doing AI and big data analytics is the CPMAI methodology, which large organizations are increasingly adopting.]

What AI technologies are you most looking forward to in the coming years?

Timothy Goodwin: I try to keep a close eye on the evolution of AI in cybersecurity research and development. A great deal of work and success has already been achieved in this area, to the point that virtually every modern antivirus product uses AI for static analysis, with improving results in dynamic analysis. What interests me most is seeing how AI can “heal” vulnerable or compromised systems in real time. Knowing how vulnerability research is traditionally conducted, there are many opportunities to use AI to keep an exploitable bug from remaining viable. Recognizing and disseminating AI-driven corrective actions before a compromise occurs is what I hope to see mature in the years to come.


