Webinar: User Feedback and Modernization (March 28, 2019)

>>…to help inform user
interface designs and to ensure that the resulting system is
user friendly and intuitive. Our modernization effort has two key components: replacing legacy information technology
systems with a high performing and integrated IP enterprise system
that supports registration, recordation, access to the public record including
historical records, information services and other public services; and conducting a copyright business
processes review and organizational analysis to ensure continuing alignment
with the Office’s strategic goals. We are very excited about the path ahead
and thank you for your participation.>>Our first presenter is Natalie Buda
Smith, Chief of Design at the Library of Congress’ Office of the
Chief Information Officer.>>Thank you. Part of the copyright modernization effort
includes a focus on user centered design, designing services for the needs and
wants of actual users of the system. User Centered Design and Copyright
Modernization includes three principles. User experience design, accessibility
and usability. I’ll walk through these three
principles and then afterwards, T.J. Willis will show actual ways that
these principles are being applied to current concepts for registration. On this slide, you’re seeing a
[inaudible] of user experience design. User experience design refers to a person’s
understanding, emotions and attitudes when using a particular website,
online product or system. It includes the practical,
experiential, affective, meaningful and valuable aspects of human-computer
interactions. Within copyright modernization,
in the Agile Project Teams, there are user experience designers that
represent the needs and wants of actual users. They work with stakeholders, product owners,
developers and testers to influence decisions that meet those needs
and those wants of actual users. User experience designers interact
with these users through user research and usability testing and communicate their
needs through journey maps, wireframes, visual designs, design systems,
front end code and more. On the next slide, we’ll take
a look at some examples of some of the user experience design work products. Here you’re seeing system diagrams
to communicate how the user flows through a service, wireframes that
specify features and functions on screens, visual designs to create hierarchies and
assist with ease of use, and a screenshot from a very popular design system, the U.S.
Web Design System, to represent the design systems that we’re creating for copyright modernization,
which communicate the patterns and specifications in all the systems, to ensure that there’s
consistency across not only the application, but all online services in
copyright modernization. On the next slide, I’ll talk about the
second design principle of accessibility. Accessibility is the design
of products, devices, services or environments for
people with disabilities. Accessible design and development ensure
both direct access, which is unassisted and indirect access, meaning compatibility
with a person’s assistive technologies. In the copyright modernization efforts, we apply accessibility best practices
throughout the software development life cycle, to address Section 508 and WCAG
requirements and standards. Accessibility is not the same as usability,
which aims for effectiveness, efficiency and satisfaction in an online service. On the next slide, I’ll talk about the
last principle of usability testing, that’s a part of making and
ensuring usability of the systems. We conduct user research in copyright
modernization through outreach, surveys and interviews, with a range
of users, from novice to experts. Usability testing is a formal testing
process with actual users that is used in Agile Development projects, to understand if the online services meet the
actual user’s objectives and are complete, easy to use and understandable. After an [inaudible] release, usability testing
is conducted using user centered scenarios, with actual users of the system. This means invalidating assumptions,
measuring ease of use and identifying issues with
the released product. Any issues that are identified are
entered into a product backlog, so that they can be prioritized by the team. The value of usability testing is that it’s
conducted with users that need to use the system and want to use it without obstacles. But that actual tester is removed from the
day to day decisions of the project team. Well, that’s my introduction to the
three principles of user centered design that we’re using in copyright modernization. I’ll turn it back. Thank you.>>Our next presenter is Thomas Willis, Project Manager in the Copyright
Office’s Registration Program.>>Thank you. So what Natalie has shown us so far, with this concept of user centered
design has really been a core principle that we’ve been using since
the start of our effort to re-imagine this new registration
application system. And this is a crucial difference between
how we’re approaching development this time and how we approached it when we first switched from a completely paper based
system, to an electronic one in 2008. There were tests back then to
include users in the testing phases, but that was really towards
the end of the process. And at that point, it was really
difficult to make wholesale changes. So contrast that with how we’re approaching this
now, where we started this process in October of 2017,
with a large scale outreach effort, where we asked the users what they
needed and wanted to see in a new system. And this was before a single
line of code was written. So the image you’re seeing right now, it shows a wall where we posted
thousands of pieces of that feedback. We’ve organized the responses by theme
and these were instrumental in coming up with the initial concept
screens you’re about to see. And during that process, we hosted in person
interviews in several different cities, gathered comments from the
public and analyzed thousands of survey results gathered
from the current ecosystem. We also considered informal feedback, gathered
over the years from various sources, from staff, applicants, industry, outside observers. I was actually formerly an
examiner for many years. And through that experience, I know I would
regularly receive feedback from applicants on frustrations they have with the current
system and what they want to see in a new one. So this is also an important feedback
mechanism, which is through our staff, who work in the same systems as
our applicants and share many of those same frustrations with it. So while I say the outreach effort
formally started towards the end of 2017, the reality is that we’ve been
gathering this feedback for many years. And all this information is helping to shape how
the new registration application system’s going to work. So for those that have used eCO
before, here’s a screenshot of one of the registration application
pages for a standard literary work. Now, there are currently 31 application
forms that are in use or soon will be, within the U.S. Copyright Office and we’ll
need to address all of them in a new system. And whatever we build must be able to
adapt quickly to changing office needs, regulations and feedback from users. The basic structure behind eCO mirrors the
paper applications: the different spaces in the application were fit into this electronic
system, really without much consideration given to whether or not that’s the
best way for users to interact.
system is to figure out the best path for users to enter required information. So we’re not necessarily limiting
ourselves to keeping the same format, just because that’s the way
we’ve always done it. So here’s the log in page
for our current system. Its purpose is to serve as a welcome
screen and to show important news, such as planned system outages and
important changes or updates. And users have told us that when they’re
confronted with this wall of text and this tight layout, it’s cumbersome to read. And many just log in without
looking at it at all. So this brings us to some major themes
that we’ve been hearing from users, about what they want in a re-imagined system. They want a comfortable interface
that doesn’t overwhelm them. They want to navigate easily between
screens and throughout the system. If they need help, they want to easily
access targeted information based on where they are in the process. They want system validations in order
to ensure accuracy of certain fields and to limit potential Copyright
Office correspondence. And if there is correspondence that’s
required between the applicant and the Office, this needs to be done in an efficient manner. So contrast this with a re-imagined
log in screen. So here we see a conceptual design that gives the user a completely
different initial experience. You can see this is far less
intimidating than the current log in screen and the user isn’t overwhelmed
with too much information. You’re told upfront what to
expect before you even log in. And the left side of the banner
could contain information targeted to different types of users. So for this example, it’s a new user. They want to submit a registration
application probably for the first time. But you can imagine that if this were
a power user or someone that wants to use another Copyright Office service,
such as recordation of a document, we can give them information
that’s important to them, based off preferences they choose
within their account settings. Now, before I go too far into
my portion of the presentation, I need to tell you a bit more
about what you’re seeing here. These aren’t screenshots of an actual product. They’re essentially high resolution
wireframes that we’re using to test different concepts and features. And we tested these initial designs
by videotaping sessions with users. And they were asked to go through different
scenarios, using a clickable presentation. And we asked them to talk
through what they were thinking. We could actually see their mouse movements. We saw how they interacted and
where there were pain points. And it showed us what worked and what didn’t. And we’ve been iterating on
designs based on these sessions. We haven’t started on actual development of
the registration products yet, but when we do, towards the end of this fiscal year, there’s
going to be a lot of design changes as we react to the latest feedback, prioritize
feature sets and build new products with our partners at the Library. So this is all to say there
may be significant changes to what you’re seeing today
versus what actually goes live. And we’ve got a lot of work to do between
these concepts and a working product. So moving forward with our design concepts,
if you forget your password on the log in page, you’re brought to a couple of self-service screens that allow you
to reset your password. And here I’d also like to point out
how clean the overall interface looks. So users have reacted very positively
towards this type of design and have stated that it’s very comfortable to use. And here’s the next step in
that password reset process. And here, we’re illustrating a feature
that users have consistently asked for, which is self-service options in general. So right now in our current
system, it’s really rudimentary, in terms of their self-service capabilities. So in order to reset a password, in order to
do certain things, it’s a very manual process. Oftentimes you have to call
or you have to send an email. So we’re trying to come up with
features that are more self-service, so these can be done independent
of human intervention. Here, we’re seeing a concept that we’d like
to explore that involves notifications. Currently, we rely on direct
email to give users information. And this isn’t the most reliable way to
communicate, because email can be ignored or in some cases, it could
be routed to spam folders. So we’re envisioning a message center where users will receive notification via any
vehicle they choose, so that can be an email, a text, a recorded phone call,
any combination of those. And then once notified, if the user needs
to respond, like in a case of a problem with their application, the
user would log into the system at their leisure and handle it from there. So taking that concept a step further, users want to choose what kind
of notifications they receive. So these could be related to the
application that a user is submitting. It could be a status update on that application. It might be a notification
of a regulatory change that affects a particular
type of creator or industry. So there could be all kinds of useful
information that could be included here and we’re still kind of trying to figure
out what might be good things to offer here. Organizations have asked for the ability
to manage their accounts and employees. They have different needs, as
opposed to individual users. And these entities have asked for the
ability to do things such as transfer work from one employee to another, review
applications that have been completed by employees before they are submitted to the
Copyright Office, manage accounting and payment across multiple employees, set
permissions for what employees can do. So at this point, we’re also still looking at
what kind of capabilities we’ll be able to offer and remember, it’s still
early in the stages of design. So we’re still gathering feedback on
what we’re going to be doing here. So on this screen, we are assuming
a new user has set up their account but hasn’t submitted any applications
and they’re completely new to the system. So in this case, many users would
like some help in orienting themselves to different features within the system. They don’t want to feel like they’re
just being dropped in the middle of this overwhelming environment. They need to understand the basics of that environment that’s surrounding
them so they know what to do next. This can be toggled on or off or
it could be skipped altogether, depending on user preference. But if they do choose to take a tour,
they’re going to get helpful information, as to what’s going on in
the screen in front of them. So here the user is being shown a concept,
where there’s a pending review category, where the Office has received
all the required elements for the application and it’s
awaiting examination. So there’s a lot to explore
on this screen and again, initial versions of the system may not
look exactly like this, but it shows a lot of the features that users are asking for. And here we’re assuming that the user
has submitted multiple applications, which are in various states. Now, they need to manage these
applications, so they want a dashboard that helps them see their cases at a glance. They want to have as much
information about their case as possible, without being overwhelming. So this is one way we’re exploring doing that, where each of the cases shown has vital
information, such as the date submitted by the user; what you need to do to complete your
application and how much time you have to do it before the case is
automatically closed due to inactivity; the status or disposition of the case; and, if correspondence or other action is required, an action button that directs
them efficiently to that area. And they want to see information about the
copy they’ve submitted for examination. So in the case of the Kennedy curse, you see
that there’s an attachment section with a link
having to actually drill down into the case. Looking at the top of the
screen, under the title banner, you see a row of tabs that
group cases by status. So if there’s a draft of an application
that the user has not submitted yet, these could be grouped all together. Same with those that have been submitted and are
awaiting examination by Copyright Office staff. There could be any number of ways to
slice and dice by case information. Under that field sorter, you’ll notice
the coral colored banner at the top, stating that two applications
require your attention. This is a way to notify the user of
items in their dashboard that they need to address before the examination
process can continue. And to the left of those tabs, there’s
an option to start a new application and you also have access to templates and there
might be an option to show you the history of everything you’ve ever submitted. There are also opportunities to
include links, to help resources and other Copyright Office resources. Here, we’re illustrating another concept
that users have consistently asked for. This is the ability to save information
and access it within the application, eliminating duplication of effort within
an application or across multiple ones. In this case, there’s an author who’s entered
information that’s been saved previously, and the user simply chooses
from a dropdown menu. For examination copies that can be submitted
digitally, users want to simply drag and drop and they want to feel confident that they
have successfully uploaded their correct work. They asked for progress indicators
and confirmation of the file names that they’re sending. So this view shows how we can incorporate
all this into one easy to read screen. Taking it one step further, there are
certain files that contain metadata that might be able to prepopulate
the title field. So this could potentially cut down on
the need to enter titles in manually. Here, we’ve taken the titles from the drag
and drop screen that you saw previously and prepopulated the titles,
which of course would be editable, if the user chooses to change them. Now, users have asked for more
help in navigating the system and understanding what’s being asked of them. So instead of confronting the user with walls
of text, like you see in the current system, we want to provide multiple
layers of help when needed. Here we see some hover text built into a
small informational icon that’s available in each section. This would give the user a small amount of
information that might answer their question. If not, you can see there’s a
large icon in the top right corner with a question mark and help title. When you click on that, the
next level of help appears. This would bring up help topics that are related
to the field that you’re currently working on. The idea here is that we want to
provide encouragement to the user, to explore these different help topics without
dumping a lot of information on them at once. And this next screen just shows an expanded
view of what that help might look like. And of course, if the user wants to have
access to the regulations or compendium, as an official resource, you could
navigate to that from here as well. So here’s an interesting way that we’re
exploring for handling correspondence. In many cases, the Office will still need
to send long form letters or messages, where the examiner has to explain and the user
has to respond to an issue within their case. Part of the challenge of this is
simply explaining where exactly in the application there’s a problem. So for certain types of relatively easy
correspondence, it might be better just to have the examiner comment
directly next to the field in question and the user could also reply right there. In this concept, you can see at the
top, where there’s a navigation bar, with checkmarks indicating those fields
have been examined without questions. But three — the work details and two others
have areas where we need the user to respond and you see little checkmarks there — or I’m sorry, little flags there that say
that you need to do something with this. And finally, users have consistently
asked for the ability to have a summary of all the information they’ve entered into the
application and an easy way to navigate back to individual sections to make corrections. So here, we’re illustrating
that concept of a review screen, where you could preview the
certificate before submission. So at this point, I’m running out of
time for my portion of the webinar, but I’d just like to close by saying
that it’s really an exciting time here at the Copyright Office, as we
look to modernize these systems. We’re going to be involved in a continuous
feedback loop and at various times, we’re going to be reaching out to
users to help us with that process. So stay tuned for those announcements and please
help us develop a system that works for you. Thank you.>>Our final presenter is Tapan
Das, Analysis Section Head at the Copyright Modernization Office.>>Okay. Thank you, Ananda. Hello, everybody. You have learned from our previous speakers
about the user experience, conceptual design and usability testing for the U.S.
Copyright Office modernization efforts. Now, I will explain the internal acceptance
testing performed during the system development. Will you buy a house without
doing an inspection of the house? Will you buy a car without a test drive? Of course not. In fact, you will do several
inspections before you buy. Internal acceptance testing is a particular
tool to confirm that the system is developed, as per the user requirements
and meets their expectations. Here are the topics that I will discuss. Before going into the details
of internal acceptance testing, I will explain general system testing
and then I will discuss the what, when, how and who of internal acceptance testing. System testing is a process
of validating and verifying that the newly developed system meets the
business and technical requirements. Business requirements are the
particular activities of an enterprise that must be performed to meet
the organizational objectives. Technical requirements [inaudible] to the
technical aspects the system must fulfill, such as performance related issues, the
[inaudible] cases, availability and security. The second is that the system works as
expected by the users, for example actual users like you in the government and the community,
ensuring that the system is functioning
as per the business requirements. On the right, you see a typical testing life
cycle, which is divided into five phases. In the first phase, a test plan is developed. A test plan is a document
describing the software testing scope, approach, resources and schedule. It is the basis for formally testing
any software product in a project. The second phase is design
and during this phase, it is described how the testing
should be done. The third phase is test execution;
it is the process of executing the code and comparing the expected and actual results. As part of the exit criteria from the test
execution, we ensure that the defects are logged in the defects tracking system and defects
are analyzed for further resolution. In the test reporting phase, test metrics
are generated, based on the test results and distributed to the project team. Here are some of the major reasons for
testing a system or software application. It is important to ensure that the
application does not fail while running. It can be very expensive to fix
a system after it has gone live. If a defective system goes live, it impacts
the planned go-live and any dependent activities. It’s necessary that a high quality
system is delivered to the end users for improved user satisfaction,
while using the system. There are two different types of
testing: functional and non-functional. Functional testing is a type of testing
which verifies that each function of the software application operates in
conformance with the requirement specification. Non-functional testing is a type of testing but
check non-functional aspects, like performance, usability, reliability of this
system, of the software application. Functional testing are usually done
through various levels of testing: unit testing,
integration testing, system testing and lastly acceptance testing. Unit testing is a level of software testing, where individual units or
components of the software are tested. Integration testing
is a level of software testing where individual units are
combined and tested as a group. System testing is a level of
software testing, where the complete and integrated software is tested. And acceptance testing is formal testing
with [inaudible] the needs and requirements. It is conducted to determine whether or not
the system satisfies the acceptance criteria, and enables the user to determine
whether or not to accept the system. There are several different
types of non-functional testing. For example, performance testing,
security, usability and compatibility. Performance testing is in general
a testing practice performed to determine how a system
performs, in terms of responsiveness and stability under a particular work load. Usability testing is a way to
see how easy to use the system. And security testing is a type of software
testing that intends to uncover reliabilities of the system and determine that its data and
resources protected from possible intruders. Compatibility testing is a type of software
testing to check whether a new system is capable of running our different hardware
operating systems, applications, network performance on mobile devices. As I explained in the previous slide, acceptance testing is a formal testing
regarding the user needs and requirements. In acceptance testing, actual users run through
their day to day operations to make sure that there are no defects or errors. At the end of acceptance testing, it is
determined if the users will accept the system or need further fixes to resolve
any outstanding issues. System development and testing of the system against the user requirements must be completed
satisfactorily before the user acceptance testing can start. Acceptance testing is performed once the
feature or function is fully
developed and tested by the internal testing team
[inaudible] satisfactorily.
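The distinction between internal unit testing and acceptance testing can be sketched with a toy Python example. Everything here is invented for illustration (the fee function, its amounts and the test case format are hypothetical, not actual Copyright Office fees or test artifacts): unit tests exercise one function in isolation, while an acceptance test case records steps, test data inputs and an expected output from the end user's perspective.

```python
# Toy illustration of two testing levels. All names and
# amounts below are hypothetical, invented for this sketch.

def registration_fee(claims):
    """Hypothetical fee schedule: a flat $65 per claim."""
    if claims < 1:
        raise ValueError("at least one claim is required")
    return 65 * claims

# Internal unit tests: one function, tested in isolation.
assert registration_fee(1) == 65
assert registration_fee(3) == 195

# Acceptance test case: steps, test data inputs and an expected
# output, written from the end user's perspective.
acceptance_case = {
    "name": "TC-001: single-claim fee",
    "steps": [
        "Log in as an applicant",
        "Start a new application with one claim",
        "Proceed to the payment screen",
    ],
    "test_input": {"claims": 1},
    "expected": 65,
}

# Execute the case and compare actual output to expected output.
actual = registration_fee(**acceptance_case["test_input"])
result = "pass" if actual == acceptance_case["expected"] else "fail"
print(acceptance_case["name"], result)
```

In practice, acceptance cases like this would be executed by business testers against a deployed test environment rather than run as code, but the structure (steps, inputs, expected output, pass/fail) is the same.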
to catch most of the defects. Acceptance testing is the last phase before
the delivered functionality goes live. Here are two different approaches, depending on
the type of software development. In the waterfall approach,
as the system is developed, different types of testing are done, in sequence. As you can see, acceptance testing is done
at the end of the entire system development. The development team needs to wait until the
end to get feedback from the business users. Whereas in the Agile approach,
software is developed in small increments.
functional features are developed and released. Acceptance testing is performed
after each iteration. And since acceptance testing is done much earlier
in the development cycle, developers get early feedback on any functional
or system issues. There are a number of activities
involved in the acceptance testing. At the U.S. government office, we follow a 10
step process to conduct the acceptance testing. First we identify the testing team, consisting
of the product owners, testing coordinators, business acceptance testers,
including subject matter experts. And the Office of Chief Information
Office and Development and Testing Things. The next is plan acceptance testing schedule. In this, we developed the testing
timeline for key activities and share it with the acceptance testing team. Next, determine the user stories
that will go into the test team. A user story describes a business function
or feature from an end user perspective. It describe the type of user,
what they want and why. Next, we ensure that the test cases
are developed based on the user stories that are part of acceptance testing. Test cases define the set of steps to
test the delivered functionality, along with test data inputs and expected outputs. Next, we review the test cases
and incorporate any feedback. Following that, we prepare the test data. Next, we ensure the test environment is
ready, including loading of the test data. The new code is deployed and some more testing
is done on the new features that were deployed. In the establish tracking
methods step, the defect logging and test [inaudible] processes are established. Next, we do the kickoff meeting to
start the acceptance testing execution. During the checkpoint meetings, we
monitor the progress of the testing. And during the defects triage meetings,
we prioritize defects for possible fixes [inaudible]. At the end of testing, coordinators will compile
test results and provide a recommendation for system acceptance to the product owner. Acceptance testing is a collaborative
effort, conducted by four main roles. They are test coordinators,
product owners, business users and testers, and the OCIO development teams. Test coordinators are from the
Copyright Modernization Office. They coordinate acceptance activities
across all the teams. They provide training and guidance to the
business users on how to write test cases, perform test execution and log defects. At the end of testing, they generate
test metrics and provide a recommendation for system acceptance to the product owner. The product owner is a member of the Scrum
team, with the responsibility of managing
and prioritizing business stories in the backlog. Product owners prioritize
in a different setting than the [inaudible] meetings
for [inaudible] future sprints. Product owners finally provide
the system acceptance decision, based on the expected test results. Business users are the expected
users from divisions who are responsible for managing or using the system. They perform test execution and log
any defects found during the testing into the defect tracking system. And OCIO development team is
responsible for accurately developing and maintaining the technology solutions
for the U.S. Copyright Office. They provide support for setting up
the test environment and loading the test data. In the next slide, what will
happen if the system is deployed without allowing users to
perform acceptance testing? Here are some of the challenges, like
business requirements may not be met. There could be high risk of system failure. There may not be any opportunity
to identify new features. Users may end up with poor user experiences. A defective system will increase
ongoing maintenance costs and the day to day business operations could be impacted,
due to poorly performing applications. Now, I will hand over to Ananda for the next section.>>We will now begin the question
and answer portion of the program. You can submit questions using the Q&A
panel, to the right of the Webex screen. Our first question for Tapan, what are projects
you are currently conducting user acceptance testing for?>>We are currently conducting
acceptance testing on the recordation modernization system, which
is under development. We are conducting acceptance testing
after each sprint. And we plan to continue to conduct acceptance testing
for any system during the development cycle and before it goes to production.>>This next question is for T.J. Are these new
design concepts going to change along the way?>>Yes, absolutely. So I referred in my presentation probably
several times to that effect, that even the ones that you’re looking at right now,
some of those have actually changed since we did the initial round of user testing. So I do expect that there’s going to be
quite a bit of change, as we go through. And this concept of user testing —
I’ve got to say that when we did this, it was a surprise to me,
because again, I’m an examiner. I’ve done this for many years
and I thought that I knew most of what the issues would be
that users have with this. And I thought that we had really nailed and been
successful with some of the designs that we came up with, “Hey, this is going
to be a real winner.” And then when they went out for testing
and folks that hadn’t been involved in the development process started to test it, we actually found that some
of these were failures. So based on that, we’ve gone back to the
drawing board and we’re redesigning some of these, really on a daily basis.>>Next question is for Ricardo. Will the new system provide
for APIs, so that volume users and agencies can automate input and validation of copyright registration applications?>>Thank you, Ananda. Yes, the system will provide APIs. We’re working with the Office
of the Chief Information Officer to develop a standardized approach to that technology and how to move forward. The answer is yes.>>Next question is also for Ricardo. The questioner asks, “Can I
volunteer to be a user tester?”>>I believe with the recordation modernization project, we’re going to be sending requests for people to actually test some of the functions as they become available. I will make sure that, if you leave us your email and your contact information, I provide that to our product owners so that, you know, they can reach out to you, if interested.>>Another question for Ricardo. Is there a tentative timeline for the project?>>So currently the recordation
modernization will be the pilot. The approval process will have a release in 2020, and then, after obtaining feedback from staff and the testers, there will be a first release in 2021. As T.J. stated, registration is just getting started; we’re getting ready to start this year. But we will soon be providing a roadmap that states the actual events that are going to be occurring, in terms of the modernization. So stay tuned for that.>>Next question is for Karen. Did you learn any best practices from
the U.S. PTO team working on TEAS?>>Thank you, Ananda. That’s a great question. We are trying to learn from
basically every system out there, in terms of things that we should do and not do. So as T.J. mentioned earlier, we’re
actually learning from our rollout of the initial system that we had, eCO, in terms of things that we want to improve on. And we also are looking to our sister agency, USPTO, which has also done modernization efforts, to ensure that, if there are things that we can
take from them, either positive things or things that are kind of lessons
learned, that we would be able to incorporate that in our development as well. So we’ve already been out to PTO more than
once to discuss things with their IT teams, and I expect that that will continue as we
move further into the development phase.>>Next question. Do we have Natalie? The next question is for Natalie. Does the Library have trained user
experience professionals on staff to assist with the moderating, management, and incorporation
of actual user experience feedback?>>Thanks. Yes, we do. We have a team of nine that are very seasoned
user experience design professionals. Most of us have been in the D.C. area,
working for private and government agencies for an average of about 20 years, so we do have very seasoned user
experience designers on staff. And we also supplement with contract staff
and I, myself, have been in the industry. I’ve been a user experience designer for —
I’m embarrassed to say — about 30 years now. So we keep up with industry best practices and industry tools. So that is something that is
of importance to the Library. And if you’ve seen the recent Library strategic
plan, being user centered is a quality that all of us are trying
to achieve in everything we do. And so we take user experience design seriously. It’s also fun. But we do make it an important part
of everything we do at the Library, including design and development
of IT and visual products.>>The next question is for Karen. Will modernization efforts extend to the
copyright registration records catalogue search? If so, will deposit or work images be
available through the catalogue search?>>Thank you. That’s a great question. I think that’s one of the lessons learned. Actually when we did our initial eCO system, we really did just focus
exclusively on registration: taking that paper-based registration system and making it an electronic registration system. What we are trying to do now is
really look holistically at all of the Office’s services, so we’d like to call the system in development the Enterprise Copyright System to really reflect the fact that we want
to incorporate recordation, which we, as you heard today, are also doing, as
well as registration and public catalogue. So we expect that we will have multiple work
streams, working on registration, recordation, public catalogue, all at the same time. And we will make sure that we
incorporate various work streams that will allow us to coordinate. So if there is one thing that will help
recordation, we’ll also be able to apply that new development aspect to
registration or to public catalogue. With respect to search of images, honestly
there are a lot of issues that we would have to look at, as to how we
might be able to do that. But that certainly is something
that we are considering. There’s a lot of really good image
search technology already out there and we really do want to be able
to take advantage of new technology so that our customers are able to
really utilize our services in ways that they hadn’t been able to before.>>And the final question for today, for
Ricardo, if there’s a general question about Copyright Office modernization,
do you have an email contact?>>Yes. The email is [email protected]. We monitor that email daily, so if you have
any questions, any comments or suggestions, please send it to that email address. I believe we will show the
email in the presentation or we could provide it to you on here. But it’s [email protected]>>That concludes our program for today. Thank you very much for joining
us and please remember to join us for our next webinar, scheduled
for the end of May.