Joe Padfield 0:00 I see. Hopefully you can see my presentation; if not, let me know. Okay. Hi again, it's Joe Padfield from the National Gallery, and I'm going to carry on now by talking about reusing IIIF resources in documentation solutions. Okay, so there's a growing number of institutions presenting images to the public via IIIF, and the IIIF standard is designed to allow your image to be stored once but used in multiple different places. That is the idea. So if IIIF systems have been developed for public engagement and dissemination, they can also be used for internal work and documentation, and both of these use cases can help support each other moving forward, helping to justify the effort.

Now, I'm going to look at a long-running, still ongoing use case of this within the National Gallery in London, to give you an example. The problem we had a number of years ago within the National Gallery, and I think this is not unique to us, was that we had a wide range of resources stored in lots of different places, and we wanted to look at how we could work with and share all of our own images and data. This work is still ongoing; there are lots of things still to do. So at the time we had a range of key resources held and managed by different departments, and this was the problem. We had text-based resources in the form of collection management systems, collection image databases, written reports and notebooks, and lots of images of different types, held, as I said, in different departments. Now, this was a classic silo problem: a large number of images and datasets, from small to very large, stored securely within departmental folders, with limited cross-departmental access achieved by duplicating files. So we had a storage problem, a file-space problem, and access limitation problems.

So, what we wanted to do was build an internal, web-based system to open up these resources to all departments and provide dynamic, up-to-date streaming access to all of our image resources, so that people within the National Gallery could just get on with their work. We needed to cope with very small to very large scale images. Now, this was quite a few years ago, but this is effectively the solution we came up with. It was an on-the-fly, dynamic digital asset management system that basically looked at all of the images available in a defined set of folders, processed them, and made them available through an internal API using IIIF. That was done by combining information from collection image databases, folder names and file names, and EXIF data and metadata extracted from the files themselves. The way this works is you have the internal API; you can then present the images in an image presentation tool, which we did with IIIF using Mirador; and that can then be used to carry out new collection and material research, which adds more digital resources in, and you have a continuing circle, so we can support our own work and keep working this way.

Now, this is a quick screenshot. All the collection images, including painting samples, are automatically added to the system every night. The National Gallery is not a huge collection; the main collection is around 2,500 paintings or so. But we have tens of thousands of very good quality images in here. I think in total we're looking at around 60,000 to 70,000 images directly related to the paintings and samples.
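As a very rough sketch of the kind of nightly aggregation pass described above, the snippet below walks a defined set of departmental folders, pulls basic EXIF metadata from each image with Pillow, and records a IIIF Image API endpoint for it. The folder paths, identifiers and server URL are hypothetical placeholders, not the Gallery's actual configuration.

```python
"""Minimal sketch of a nightly image-aggregation pass (hypothetical paths/URLs)."""
from pathlib import Path

from PIL import Image
from PIL.ExifTags import TAGS

WATCHED_FOLDERS = [Path("/data/conservation"), Path("/data/photography")]  # hypothetical
IIIF_BASE = "https://iiif.example.internal/iiif/2"                          # hypothetical


def exif_dict(path: Path) -> dict:
    """Return human-readable EXIF tags for one image (empty dict if none)."""
    with Image.open(path) as im:
        raw = im.getexif()
        return {TAGS.get(tag_id, str(tag_id)): str(value) for tag_id, value in raw.items()}


def build_records():
    """Combine folder/file names with embedded metadata into simple IIIF records."""
    records = []
    for folder in WATCHED_FOLDERS:
        for path in folder.rglob("*.tif"):
            identifier = path.stem                     # naming protocol carries the painting/sample id
            records.append({
                "id": identifier,
                "department": folder.name,             # folder name doubles as provenance
                "exif": exif_dict(path),
                "iiif_info": f"{IIIF_BASE}/{identifier}/info.json",
            })
    return records


if __name__ == "__main__":
    for rec in build_records():
        print(rec["id"], "->", rec["iiif_info"])
```

In the real system the collection databases and the naming protocol would supply far richer metadata than this; the sketch only illustrates the nightly scan, extract and publish loop.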
Now, all staff have access to these, no matter what the size, whether they're small thumbnails or huge gigapixel-type images; they're available for people to explore. A subset of these images is now actually being presented externally through the research portals, and the full front visible images are being pushed out to the public NG website. Now, this workflow is quite bespoke, and the process has worked well for a number of years, but it's quite complex and difficult to support and maintain, by yours truly. So the current development, what we're working on at the moment, is integrating a commercially supported middleware solution that can manage the data aggregation and presentation: persistent identifier management, image preparation, generation of IIIF manifests, and the actual IIIF presentation and image servers, all wired in. The public output of this beta API can be seen at data.ng-london.org.uk. It is beta, and we are changing it; there's not an awful lot of help for the IIIF side of things on there at the moment, I should say, but we're putting more and more examples up. It does include access to the full front visible image of each painting in the collection so far, and we're working on extending that.

Now, on working with your own data within a given institution: we have access to all of this digital information, and we're starting to make more of it available to the outside world. But in addition to the paintings collection, as you will have seen from the image on the previous slide, we have a large number of painting samples, thousands of them. These are very small, sort of pinhead samples that are taken to look at the material structure of paintings. These sample images are also in the IIIF viewing system. So the plan was to use this data and these images to create a more efficient way of documenting and connecting the paintings to the samples, and to the reports and documentation that we produce about the work we're doing. So a digital sampling point system was built. This was a new set of tools we put together to allow us to describe and record which paintings samples came from, and where on the painting they came from. This work was supported by an EU Horizon 2020 project called IPERION CH.

Now, this is how we historically recorded sample documentation: we have sample books, written books, sometimes with little diagrams in them. You can see a little drawing where, let's say, a sample was taken from the top left-hand corner just underneath the tree and the flower, along with a written description of the work. There are lots of these books, so they are actually quite difficult to search; obviously you need to know what you're looking for. So what we're trying to do is make that more accessible. Going along with those sample books were often photographs or laser printouts, which are not tremendously archival, where someone would take a pen and actually mark the sample site on an A4 page. This is quite good if it sits alongside a very good text description, and you still have the person who took the sample working in the gallery and able to remember where it came from, but we could do an awful lot more with this and present the data far better. So the aim was to replace the existing analog process with a more flexible and efficient digital solution and construct an easy-to-use digital sampling point system.
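On the generation of IIIF manifests mentioned above for the beta API at data.ng-london.org.uk: the sketch below shows roughly what a minimal IIIF Presentation 3.0 manifest for a single full front visible image could look like. The identifiers, label and pixel sizes are placeholders, not real National Gallery records; the structure follows the published IIIF Presentation 3.0 specification.

```python
"""Sketch of a minimal IIIF Presentation 3.0 manifest for one painting image."""
import json

IMG_SERVICE = "https://iiif.example.org/iiif/NG0000-front-visible"  # hypothetical image service

manifest = {
    "@context": "http://iiif.io/api/presentation/3/context.json",
    "id": "https://data.example.org/manifests/NG0000.json",
    "type": "Manifest",
    "label": {"en": ["Example painting, full front visible image"]},
    "items": [{
        "id": "https://data.example.org/manifests/NG0000/canvas/1",
        "type": "Canvas",
        "width": 6000, "height": 4500,
        "items": [{
            "id": "https://data.example.org/manifests/NG0000/canvas/1/page",
            "type": "AnnotationPage",
            "items": [{
                "id": "https://data.example.org/manifests/NG0000/canvas/1/image",
                "type": "Annotation",
                "motivation": "painting",
                "target": "https://data.example.org/manifests/NG0000/canvas/1",
                "body": {
                    "id": f"{IMG_SERVICE}/full/max/0/default.jpg",
                    "type": "Image",
                    "format": "image/jpeg",
                    "width": 6000, "height": 4500,
                    "service": [{"id": IMG_SERVICE, "type": "ImageService3", "profile": "level2"}],
                },
            }],
        }],
    }],
}

print(json.dumps(manifest, indent=2))
```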
Okay, so we want to record the sample site location, the date, the names of the people who did it, the reasons, the purposes and the descriptions, as well as an accurate position on a good quality image. This was made possible by having a very good, robust naming protocol for sample images and paintings. So, this was the slightly more complex system we built: we're pulling in painting images directly from our system, the sample images, which are in a nice image format, and collection information from TMS. That was combined with the existing documentation in the sampling books via the internal API, with an LDAP server for authentication, on top of the database structures. And then we created the digital sampling point system.

So, what you have here is a Mirador 2-based system (at the moment we still have to update that) combined with a IIIF annotation server. A standard single image is selected for a given painting, and you can zoom in and drop little sample points on the painting. These are then connected directly to richer documentation, including images of the samples, automatically, by the database. So what happens is you have that single image; this can be changed and updated if required, but the migration of all sample sites is only semi-automatic, so it's a bit more complex. Effectively, because we have a robust naming protocol, when you put a sample point on the painting, and you zoom in and see in the annotation that this is IS2, for inorganic sample two, then when you save it the system will automatically go off and find all the sample images that have already been taken of that sample for that painting, plus any documentation we already have from the EXIF data and the descriptions, and pre-populate the website based on that. So we don't have to redo work that's already been done; that can be automatic.

So the system has a secure LDAP login, a simple search engine to find the paintings, and a Mirador viewer connected with web forms. You can create simple new sites, move them, edit them, add multiple samples to the same site, delete them, and move on. We also have an automated reporting system, which I will mention in a second.

Now, here are a few screenshots to give you an idea of the system. We have the Mirador viewer on the left, and the samples for a given painting; this is the image number, which is listed here at the top. You can zoom in and use Mirador to adjust the brightness and contrast of the image to make it easier to find exactly where a sample came from. You can drop a sample annotation on, in this case just labelled IS5, and as I said, when you save that it will go off and find the sample images from the database, along with the forms that allow you to add in sampling dates, rich text content for the sample site description, the reason for sampling, and any additional comments. You can also associate multiple images, potentially visible or UV images of samples from that particular site, so it's designed to document the sampling process.

Now, in addition to this, Mirador allows you to have multiple viewers, so the system lets you drop in manifests for the samples. You can then look at the painting and the sample, or multiple samples, at the same time, full-screen them, zoom in, and actually carry out real work on the samples as well as just the documentation process. So it associates all of this data together to allow people to just get on with the work.
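As a small illustration of the pre-population step described above, the sketch below assumes a hypothetical naming protocol of the form painting-number_sample-label_detail (for example "NG0000_IS2_visible"); the Gallery's actual protocol is not spelled out in the talk. Saving an annotation such as IS2 would then let the system gather every existing image of that sample for that painting.

```python
"""Sketch of the naming-protocol lookup that pre-populates a new sample site.

Assumes (hypothetically) that sample images are named
"<painting-number>_<sample-label>_<detail>", e.g. "NG0000_IS2_visible".
"""
import re


def related_sample_images(painting_no: str, sample_label: str, all_identifiers):
    """Return every stored image identifier belonging to one sample site."""
    pattern = re.compile(
        rf"^{re.escape(painting_no)}_{re.escape(sample_label)}(_|$)", re.IGNORECASE
    )
    return sorted(i for i in all_identifiers if pattern.match(i))


# Toy catalogue standing in for the internal image API
catalogue = [
    "NG0000_IS1_visible", "NG0000_IS2_visible", "NG0000_IS2_uv",
    "NG0000_IS2_crosssection", "NG0001_IS2_visible",
]
print(related_sample_images("NG0000", "IS2", catalogue))
# -> ['NG0000_IS2_crosssection', 'NG0000_IS2_uv', 'NG0000_IS2_visible']
```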
The other thing that was requested, which took a little while to do, is that if you need to output flat images of your sample sites, including for publications, PDFs or reports, the system automatically takes the sample sites, the annotated positions, and creates a new flat image with the sample numbers and dots painted or drawn onto the image, so that flat image can be printed. Image details are also created to make it easier to associate the text, and this generates a full report that can then be printed out, or used to copy and paste the details into other publications. We are still discussing the exact formatting of this, and we may look at coming up with a much more rigorous format that can then be extended, so that further documentation can be added in when analyses are carried out. It took a little while to get the sample points being painted onto the images, but it works quite well now. I was told how much time people used to spend trying to do that for publications, and they were extremely pleased that the system managed to save them all of that time.

Now, the data entry system and the simple reports are complete, but further work is required to finalize what information is included; the automatic reports could be better formatted, we'll see. We'd also like to update to Mirador 3, obviously, which is now available, and that will change some of the integration. The system is designed to only document the sampling process; that's the point. But the examination reports and sample layer descriptions are currently produced as Word documents, so the system could be extended to encompass more of that, and it could be extended to associate the analytical work that goes on alongside this. So we're still developing it. At this time the system is still quite wedded to the internal National Gallery systems, but as I said we are working towards a more standardized internal API very soon, in which case I do intend to pull the whole thing out and drop the code into GitHub in case it's of use. We're aiming to use those standards whenever possible to simplify the process and make it easier to use the same approach to annotate anything else; this was just an example of what you can put together. This is internal, obviously, unfortunately, but we would very much like to make as much of it public as possible. And then eventually, when we start to have more digitally presented, interactive engagement with research work from the National Gallery, this type of presentation could also be pushed through to allow people to see where samples came from, but this is still to be explored. Thank you very much.
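As a final footnote on the flat-image export described above: a minimal sketch of that step, using Pillow and entirely hypothetical file names and coordinates, might simply draw each saved sample point and its label onto a copy of the painting image before it goes into a report.

```python
"""Sketch of the flat-image export step: draw saved sample points onto a copy
of the painting image so it can go straight into a report or publication.

Coordinates here are fractions of image width/height; the real system would
read them from the annotation server, and the paths below are placeholders.
"""
from PIL import Image, ImageDraw


def flatten_sample_sites(image_path, sites, out_path, radius=12):
    """Paint a dot and label for each sample site onto a flat copy of the image."""
    im = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(im)
    for label, fx, fy in sites:                       # e.g. ("IS5", 0.31, 0.64)
        x, y = fx * im.width, fy * im.height
        draw.ellipse([x - radius, y - radius, x + radius, y + radius],
                     outline="red", width=4)
        draw.text((x + radius + 4, y - radius), label, fill="red")
    im.save(out_path)


# Example usage with placeholder data
flatten_sample_sites("NG0000_front.jpg",
                     [("IS2", 0.25, 0.40), ("IS5", 0.70, 0.62)],
                     "NG0000_sample_sites.jpg")
```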