<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>MIRALAB Advances in 3D Computer Graphic Reconstruction &#8211; ENIGMA EU</title>
	<atom:link href="https://eu-enigma.eu/tag/miralab-advances-in-3d-computer-graphic-reconstruction/feed/" rel="self" type="application/rss+xml" />
	<link>https://eu-enigma.eu</link>
	<description>Endorsing Safeguarding, Protection &#38; Provenance Management of Cultural Heritage</description>
	<lastBuildDate>Wed, 03 Dec 2025 13:06:55 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://eu-enigma.eu/wp-content/uploads/2023/03/cropped-android-chrome-512x512-1.png</url>
	<title>MIRALAB Advances in 3D Computer Graphic Reconstruction &#8211; ENIGMA EU</title>
	<link>https://eu-enigma.eu</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Advances in 3D Computer Graphic Reconstruction</title>
		<link>https://eu-enigma.eu/2024/04/04/miralab-exploring-the-cutting-edge/</link>
		
		<dc:creator><![CDATA[enigma_admin]]></dc:creator>
		<pubDate>Thu, 04 Apr 2024 10:52:00 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[MIRALAB Advances in 3D Computer Graphic Reconstruction]]></category>
		<guid isPermaLink="false">https://eu-enigma.eu/?p=26492</guid>

					<description><![CDATA[by MIRALAB
Exploring the Cutting Edge - Advances in 3D Computer Graphic Reconstruction for Cultural Heritage Preservation]]></description>
										<content:encoded><![CDATA[		<div data-elementor-type="wp-post" data-elementor-id="26492" class="elementor elementor-26492">
						<section class="elementor-section elementor-top-section elementor-element elementor-element-8957c8d elementor-section-boxed elementor-section-height-default elementor-section-height-default" data-id="8957c8d" data-element_type="section" data-e-type="section" data-settings="{&quot;background_background&quot;:&quot;classic&quot;}">
						<div class="elementor-container elementor-column-gap-thegem"><div class="elementor-row">
					<div class="elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-04cf9b4" data-id="04cf9b4" data-element_type="column" data-e-type="column">
			<div class="elementor-widget-wrap elementor-element-populated">
						<section class="elementor-section elementor-inner-section elementor-element elementor-element-0d6929b elementor-section-boxed elementor-section-height-default elementor-section-height-default" data-id="0d6929b" data-element_type="section" data-e-type="section">
						<div class="elementor-container elementor-column-gap-thegem"><div class="elementor-row">
					<div class="elementor-column elementor-col-100 elementor-inner-column elementor-element elementor-element-3bdf342" data-id="3bdf342" data-element_type="column" data-e-type="column">
			<div class="elementor-widget-wrap elementor-element-populated">
						<div class="elementor-element elementor-element-ce3806a flex-horizontal-align-default flex-horizontal-align-tablet-default flex-horizontal-align-mobile-default flex-vertical-align-default flex-vertical-align-tablet-default flex-vertical-align-mobile-default elementor-widget elementor-widget-heading" data-id="ce3806a" data-element_type="widget" data-e-type="widget" data-widget_type="heading.default">
				<div class="elementor-widget-container">
					<div class="title-h3 light elementor-heading-title elementor-size-default">Preserving and documenting </div>				</div>
				</div>
				<div class="elementor-element elementor-element-a58f517 flex-horizontal-align-default flex-horizontal-align-tablet-default flex-horizontal-align-mobile-default flex-vertical-align-default flex-vertical-align-tablet-default flex-vertical-align-mobile-default elementor-widget elementor-widget-heading" data-id="a58f517" data-element_type="widget" data-e-type="widget" data-widget_type="heading.default">
				<div class="elementor-widget-container">
					<div class="title-h1 elementor-heading-title elementor-size-default">historical artifacts, monuments, and sites is crucial for maintaining our global cultural legacy. </div>				</div>
				</div>
				<div class="elementor-element elementor-element-3bc36d0 flex-horizontal-align-default flex-horizontal-align-tablet-default flex-horizontal-align-mobile-default flex-vertical-align-default flex-vertical-align-tablet-default flex-vertical-align-mobile-default elementor-widget elementor-widget-text-editor" data-id="3bc36d0" data-element_type="widget" data-e-type="widget" data-widget_type="text-editor.default">
				<div class="elementor-widget-container">
												<div class="elementor-text-editor elementor-clearfix">
						However, the challenges posed by incomplete or fragmented information have encouraged the development of sophisticated methods for reconstructing accurate 3D models.
Let&#8217;s have a look at the state of the art in these methodologies, highlighting key techniques and innovations.
							</div>
										</div>
				</div>
					</div>
		</div>
					</div></div>
		</section>
				<div class="elementor-element elementor-element-0ff7b84 flex-horizontal-align-default flex-horizontal-align-tablet-default flex-horizontal-align-mobile-default flex-vertical-align-default flex-vertical-align-tablet-default flex-vertical-align-mobile-default elementor-widget elementor-widget-heading" data-id="0ff7b84" data-element_type="widget" data-e-type="widget" data-widget_type="heading.default">
				<div class="elementor-widget-container">
					<div class="title-h3 light elementor-heading-title elementor-size-default"><strong> 1. 3D Reconstruction from a Single Image</strong></div>				</div>
				</div>
				<div class="elementor-element elementor-element-65a3286 flex-horizontal-align-default flex-horizontal-align-tablet-default flex-horizontal-align-mobile-default flex-vertical-align-default flex-vertical-align-tablet-default flex-vertical-align-mobile-default elementor-widget elementor-widget-text-editor" data-id="65a3286" data-element_type="widget" data-e-type="widget" data-widget_type="text-editor.default">
				<div class="elementor-widget-container">
												<div class="elementor-text-editor elementor-clearfix">
						<figure id="attachment_26506" aria-describedby="caption-attachment-26506" style="width: 1024px" class="wp-caption alignnone"><img class="size-large wp-image-26506" src="https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-1a-1024x638.jpg" alt="" width="1024" height="638" srcset="https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-1a-1024x638.jpg 1024w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-1a-300x187.jpg 300w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-1a-768x479.jpg 768w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-1a-1536x957.jpg 1536w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-1a-2048x1276.jpg 2048w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption id="caption-attachment-26506" class="wp-caption-text">Figure 1. Overview of the method</figcaption></figure><p>This technique transforms a 2D image into a three-dimensional representation. The method employs Cross-Domain Diffusion, a process where information is shared and diffused between different domains, enhancing the depth perception of the image.<br />By taking advantage of this technique, the system can deduce the three-dimensional structure of objects in the image, even though the original input was two-dimensional.<br />This innovative approach opens the way for creating immersive 3D experiences from simple images, simplifying the transition from 2D to 3D in computer graphics.</p><p><em><strong>Reference</strong>: Long, Xiaoxiao, et al. &#8220;Wonder3d: Single image to 3d using cross-domain diffusion.&#8221; arXiv preprint arXiv:2310.15008 (2023).</em></p>							</div>
										</div>
				</div>
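The geometric core of any single-image method is lifting 2D pixels into 3D. A minimal NumPy sketch of that lifting step, assuming a pinhole camera with known intrinsics (this illustrates only the back-projection of a depth map, not Wonder3D's actual cross-domain diffusion pipeline):

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Lift a depth map (H, W) into an (N, 3) point cloud with a pinhole model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # X = (u - cx) * Z / fx
    y = (v - cy) * z / fy          # Y = (v - cy) * Z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]      # keep only pixels with valid (positive) depth

# toy example: a flat surface 2 m in front of the camera
depth = np.full((4, 4), 2.0)
cloud = backproject_depth(depth, fx=100.0, fy=100.0, cx=2.0, cy=2.0)
print(cloud.shape)  # (16, 3); every point lies at Z = 2
```

The intrinsic values (`fx`, `fy`, `cx`, `cy`) here are placeholders; in practice they come from camera calibration or are estimated jointly with the depth.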
				<div class="elementor-element elementor-element-d029aa0 flex-horizontal-align-default flex-horizontal-align-tablet-default flex-horizontal-align-mobile-default flex-vertical-align-default flex-vertical-align-tablet-default flex-vertical-align-mobile-default elementor-widget elementor-widget-heading" data-id="d029aa0" data-element_type="widget" data-e-type="widget" data-widget_type="heading.default">
				<div class="elementor-widget-container">
					<div class="title-h3 light elementor-heading-title elementor-size-default"><strong> 2. 3D Reconstruction from Multi-View Images to Point Cloud: GC-MVSNet</strong></div>				</div>
				</div>
				<div class="elementor-element elementor-element-47e3306 flex-horizontal-align-default flex-horizontal-align-tablet-default flex-horizontal-align-mobile-default flex-vertical-align-default flex-vertical-align-tablet-default flex-vertical-align-mobile-default elementor-widget elementor-widget-text-editor" data-id="47e3306" data-element_type="widget" data-e-type="widget" data-widget_type="text-editor.default">
				<div class="elementor-widget-container">
												<div class="elementor-text-editor elementor-clearfix">
						<p>This is a sophisticated computer-vision method: multiple images of an object or scene are captured from different viewpoints, and GC-MVSNet, a specialized algorithm, reconstructs a detailed and accurate three-dimensional point-cloud representation. The GC-MVSNet algorithm excels at handling complex scenes, using global context information to refine the reconstruction and improve the overall quality of the generated point cloud.</p><p><img class="size-large wp-image-26499" src="https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-2-1024x357.jpg" alt="" width="1024" height="357" srcset="https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-2-1024x357.jpg 1024w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-2-300x105.jpg 300w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-2-768x268.jpg 768w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-2-1536x536.jpg 1536w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-2-2048x715.jpg 2048w" sizes="(max-width: 1024px) 100vw, 1024px" /> Figure 2. Overall Architecture</p><p>By combining information from multiple views, this approach enhances depth perception and spatial accuracy, making it a powerful tool for creating realistic 3D models from a set of 2D images.</p><p><img class="wp-image-26514 size-large" src="https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-3a-1024x638.jpg" alt="" width="1024" height="638" srcset="https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-3a-1024x638.jpg 1024w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-3a-300x187.jpg 300w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-3a-768x479.jpg 768w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-3a-1536x957.jpg 1536w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-3a-2048x1276.jpg 2048w" sizes="(max-width: 1024px) 100vw, 1024px" /></p><p><em><strong>Reference</strong>: Vats, Vibhas K., et al. &#8220;GC-MVSNet: Multi-View, Multi-Scale, Geometrically-Consistent Multi-View Stereo.&#8221; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2024.</em></p>							</div>
										</div>
				</div>
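Multi-view stereo pipelines in this family ultimately fuse per-view depth estimates into a single point cloud. A minimal sketch of that fusion step, assuming known camera-to-world poses and skipping the learned depth-estimation stage entirely (so this is not GC-MVSNet itself, only the geometric merge it ends with):

```python
import numpy as np

def fuse_views(depth_points, poses):
    """Merge per-view point clouds (list of (N, 3) arrays) into one world-space cloud.

    `poses` are 4x4 camera-to-world matrices; each view's points are expressed
    in its own camera frame and transformed into the shared world frame.
    """
    fused = []
    for pts, cam_to_world in zip(depth_points, poses):
        homo = np.hstack([pts, np.ones((len(pts), 1))])   # (N, 4) homogeneous coords
        fused.append((homo @ cam_to_world.T)[:, :3])      # transform, drop w
    return np.vstack(fused)

# two cameras observing the same 3D point; camera B is shifted 1 m along X,
# so in B's frame the point appears at x = -1
view_a = np.array([[0.0, 0.0, 1.0]])
view_b = np.array([[-1.0, 0.0, 1.0]])
pose_a = np.eye(4)
pose_b = np.eye(4)
pose_b[0, 3] = 1.0
cloud = fuse_views([view_a, view_b], [pose_a, pose_b])
# both rows of `cloud` now land on the same world point (0, 0, 1)
```

Real pipelines additionally filter the merged cloud with photometric and geometric consistency checks; that is the part GC-MVSNet's geometric-consistency loss targets.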
				<div class="elementor-element elementor-element-2f73ec7 flex-horizontal-align-default flex-horizontal-align-tablet-default flex-horizontal-align-mobile-default flex-vertical-align-default flex-vertical-align-tablet-default flex-vertical-align-mobile-default elementor-widget elementor-widget-heading" data-id="2f73ec7" data-element_type="widget" data-e-type="widget" data-widget_type="heading.default">
				<div class="elementor-widget-container">
					<div class="title-h3 light elementor-heading-title elementor-size-default"><strong> 3. 3D Reconstruction from Multi-View Images to 3D Mesh: RayAug</strong></div>				</div>
				</div>
				<div class="elementor-element elementor-element-d3768c7 flex-horizontal-align-default flex-horizontal-align-tablet-default flex-horizontal-align-mobile-default flex-vertical-align-default flex-vertical-align-tablet-default flex-vertical-align-mobile-default elementor-widget elementor-widget-text-editor" data-id="d3768c7" data-element_type="widget" data-e-type="widget" data-widget_type="text-editor.default">
				<div class="elementor-widget-container">
												<div class="elementor-text-editor elementor-clearfix">
						<figure id="attachment_26496" aria-describedby="caption-attachment-26496" style="width: 1024px" class="wp-caption alignnone"><img class="wp-image-26496 size-large" src="https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-4-1024x357.jpg" alt="" width="1024" height="357" srcset="https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-4-1024x357.jpg 1024w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-4-300x105.jpg 300w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-4-768x268.jpg 768w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-4-1536x536.jpg 1536w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-4-2048x715.jpg 2048w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption id="caption-attachment-26496" class="wp-caption-text">Figure 4. Some results</figcaption></figure><p>This process involves capturing images of an object or scene from different angles and employing RayAug, a specialized algorithm, to convert the information into a detailed and textured 3D mesh.<br />RayAug optimizes the reconstruction by considering lighting and shading effects, resulting in a more realistic and visually appealing 3D representation. This method is particularly effective in creating accurate and intricate 3D models by leveraging the information gathered from multiple views, offering a powerful tool for generating lifelike 3D meshes from a set of 2D images.</p><p><em><strong>Reference:</strong> Yao, Jiawei, et al. &#8220;Geometry-guided ray augmentation for neural surface reconstruction with sparse views.&#8221; arXiv preprint arXiv:2310.05483 (2023).</em></p>							</div>
										</div>
				</div>
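Neural surface methods in this family evaluate a network at points sampled along camera rays; RayAug's specific contribution is how additional rays are generated for sparse views. A minimal sketch of the underlying ray-sampling step only (the augmentation itself and the neural surface are omitted):

```python
import numpy as np

def sample_along_rays(origins, dirs, near, far, n_samples):
    """Sample 3D points along rays: the inputs a neural surface model is queried at.

    origins, dirs: (R, 3) ray origins and unit directions.
    Returns points of shape (R, n_samples, 3) and the sample depths t of shape (n_samples,).
    """
    t = np.linspace(near, far, n_samples)                         # (S,) depths along each ray
    # points[i, j] = origin_i + t_j * dir_i, via broadcasting
    pts = origins[:, None, :] + t[None, :, None] * dirs[:, None, :]
    return pts, t

origins = np.zeros((2, 3))                      # both rays start at the camera center
dirs = np.array([[0.0, 0.0, 1.0],               # one ray along +Z
                 [0.0, 1.0, 0.0]])              # one ray along +Y
pts, t = sample_along_rays(origins, dirs, near=0.5, far=2.0, n_samples=4)
# pts has shape (2, 4, 3); along the first ray, the Z coordinates equal t
```

Uniform sampling is the simplest scheme; production systems typically add stratified jitter and a second, importance-sampled pass near the surface.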
				<div class="elementor-element elementor-element-15048e2 flex-horizontal-align-default flex-horizontal-align-tablet-default flex-horizontal-align-mobile-default flex-vertical-align-default flex-vertical-align-tablet-default flex-vertical-align-mobile-default elementor-widget elementor-widget-heading" data-id="15048e2" data-element_type="widget" data-e-type="widget" data-widget_type="heading.default">
				<div class="elementor-widget-container">
					<div class="title-h3 light elementor-heading-title elementor-size-default"><strong> 4. 3D Reconstruction from Damaged Objects: MendNet, Pix2Repair</strong></div>				</div>
				</div>
				<div class="elementor-element elementor-element-f96496e flex-horizontal-align-default flex-horizontal-align-tablet-default flex-horizontal-align-mobile-default flex-vertical-align-default flex-vertical-align-tablet-default flex-vertical-align-mobile-default elementor-widget elementor-widget-text-editor" data-id="f96496e" data-element_type="widget" data-e-type="widget" data-widget_type="text-editor.default">
				<div class="elementor-widget-container">
												<div class="elementor-text-editor elementor-clearfix">
						<h3>MendNet</h3><p>When dealing with objects that have missing or damaged parts, MendNet utilizes advanced algorithms to reconstruct a complete and accurate 3D model. This process involves analyzing the available information from various viewpoints and intelligently filling in the gaps or repairing damaged areas in the object&#8217;s structure.</p><figure id="attachment_26495" aria-describedby="caption-attachment-26495" style="width: 1024px" class="wp-caption alignnone"><img class="size-large wp-image-26495" src="https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-5-1024x638.jpg" alt="" width="1024" height="638" srcset="https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-5-1024x638.jpg 1024w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-5-300x187.jpg 300w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-5-768x479.jpg 768w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-5-1536x957.jpg 1536w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-5-2048x1276.jpg 2048w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption id="caption-attachment-26495" class="wp-caption-text">Figure 5. MendNet: From fractured 3D points</figcaption></figure><p><em><strong>Reference</strong>: Lamb, Nikolas, Sean Banerjee, and Natasha K. Banerjee. &#8220;Mendnet: Restoration of fractured shapes using learned occupancy functions.&#8221; Computer Graphics Forum. Vol. 41. No. 5. 2022.</em></p><h3>Pix2Repair</h3><p>Pix2Repair is an advanced computer-graphics approach that enables 3D reconstruction of objects with missing or impaired sections. The method analyzes available visual data from different angles and intelligently fills in the missing or damaged parts of the object. 
Pix2Repair&#8217;s ability to restore the geometry of damaged objects makes it a valuable tool in the field of cultural heritage.</p><figure id="attachment_26494" aria-describedby="caption-attachment-26494" style="width: 1024px" class="wp-caption alignnone"><img class="size-large wp-image-26494" src="https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-6-1024x638.jpg" alt="" width="1024" height="638" srcset="https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-6-1024x638.jpg 1024w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-6-300x187.jpg 300w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-6-768x479.jpg 768w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-6-1536x957.jpg 1536w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-6-2048x1276.jpg 2048w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption id="caption-attachment-26494" class="wp-caption-text">Figure 6. Pix2Repair: From a single image (Left: Input Images, Middle: Restorations, Right: Ground Truths)</figcaption></figure>							</div>
										</div>
				</div>
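MendNet's key idea is representing shapes as occupancy functions and deriving the restoration as the part of the complete shape missing from the fractured one. That set-difference idea can be sketched with a toy analytic occupancy (a sphere) standing in for the learned neural network:

```python
import numpy as np

def occupancy_sphere(pts, center, radius):
    """Toy analytic occupancy field: 1 inside a sphere, 0 outside.
    Learned methods such as MendNet replace this with a neural network."""
    return (np.linalg.norm(pts - center, axis=-1) <= radius).astype(np.uint8)

# regular sample grid over the cube [-1, 1]^3
g = np.linspace(-1.0, 1.0, 16)
pts = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1).reshape(-1, 3)

complete = occupancy_sphere(pts, np.zeros(3), 0.8)    # the intact shape
# simulate a fracture by slicing off the cap where x > 0.3
fractured = (complete.astype(bool) & (pts[:, 0] <= 0.3)).astype(np.uint8)
# the restoration is exactly the occupancy the fracture removed
restoration = complete - fractured
```

By construction, `fractured + restoration` reproduces `complete` everywhere; MendNet learns to predict such a restoration field from the fractured input alone, without ever seeing the complete shape at test time.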
				<div class="elementor-element elementor-element-d4facd9 flex-horizontal-align-default flex-horizontal-align-tablet-default flex-horizontal-align-mobile-default flex-vertical-align-default flex-vertical-align-tablet-default flex-vertical-align-mobile-default elementor-widget elementor-widget-heading" data-id="d4facd9" data-element_type="widget" data-e-type="widget" data-widget_type="heading.default">
				<div class="elementor-widget-container">
					<div class="title-h3 light elementor-heading-title elementor-size-default"><strong> 5. Open Datasets</strong></div>				</div>
				</div>
				<div class="elementor-element elementor-element-a9f1ed5 flex-horizontal-align-default flex-horizontal-align-tablet-default flex-horizontal-align-mobile-default flex-vertical-align-default flex-vertical-align-tablet-default flex-vertical-align-mobile-default elementor-widget elementor-widget-text-editor" data-id="a9f1ed5" data-element_type="widget" data-e-type="widget" data-widget_type="text-editor.default">
				<div class="elementor-widget-container">
												<div class="elementor-text-editor elementor-clearfix">
						<p>Tools and platforms that enable interactive and collaborative reconstruction efforts are emerging. Crowdsourced data, combined with expert contributions, transforms the reconstruction process into a joint effort. The following section presents some of the most advanced open datasets of 3D scans of real-world broken objects in cultural heritage.</p><h3>Platform / SHREC 2021</h3><p>SHREC 2021 provides a platform for researchers to showcase their advancements in the field of 3D shape retrieval. Participants utilize innovative methods to efficiently retrieve and match 3D models of cultural heritage objects, such as sculptures, artifacts, and monuments. The goal is to enhance the accuracy and effectiveness of retrieving relevant cultural heritage items from vast 3D shape databases.</p><figure id="attachment_26493" aria-describedby="caption-attachment-26493" style="width: 1024px" class="wp-caption alignnone"><img class="size-large wp-image-26493" src="https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-7-1024x638.jpg" alt="" width="1024" height="638" srcset="https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-7-1024x638.jpg 1024w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-7-300x187.jpg 300w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-7-768x479.jpg 768w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-7-1536x957.jpg 1536w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-7-2048x1276.jpg 2048w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption id="caption-attachment-26493" class="wp-caption-text">Figure 7. 3D scanned cultural heritage objects</figcaption></figure><p><em><strong>Reference</strong>: Sipiran, Ivan, et al. &#8220;SHREC 2021: Retrieval of cultural heritage objects.&#8221; Computers &amp; Graphics 100 (2021): 1-20.</em></p><h3>Dataset and methods / Pix3d</h3><p>Pix3D is a dataset and methodology designed for modeling 3D shapes from a single image. 
The dataset includes images representing objects in various interior scenes, together with the corresponding 3D models. Researchers and developers use Pix3D to train and test algorithms capable of generating 3D shapes from a single 2D image. The methods employed involve teaching machines to understand the spatial structure of objects in photographs and translating this knowledge into precise three-dimensional representations.</p><figure id="attachment_26555" aria-describedby="caption-attachment-26555" style="width: 1024px" class="wp-caption alignnone"><img class="size-large wp-image-26555" src="https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-8-1024x638.jpg" alt="" width="1024" height="638" srcset="https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-8-1024x638.jpg 1024w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-8-300x187.jpg 300w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-8-768x479.jpg 768w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-8-1536x957.jpg 1536w, https://eu-enigma.eu/wp-content/uploads/2024/05/miralab-article-8-2048x1276.jpg 2048w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption id="caption-attachment-26555" class="wp-caption-text">Figure 8. 2D images &#8211; 3D shapes</figcaption></figure><p><em><strong>Reference</strong>: Sun, Xingyuan, et al. &#8220;Pix3d: Dataset and methods for single-image 3d shape modeling.&#8221; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.</em></p><h3>Dataset / Fantastic Breaks</h3><p>Fantastic Breaks is a unique dataset that consists of paired 3D scans of real-world broken objects along with their complete, undamaged counterparts. This dataset is invaluable in the field of computer vision and 3D reconstruction, as it provides a diverse collection of objects that have undergone various types of damage.<br />Each pair of scans allows researchers and developers to study and train algorithms on reconstructing the original, intact state of objects from their broken versions. This dataset is particularly useful for advancing techniques related to object restoration, damage analysis, and understanding how 3D reconstruction algorithms perform in challenging scenarios involving damaged objects.</p>							</div>
										</div>
				</div>
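Benchmarks like SHREC and paired datasets like Fantastic Breaks need a way to measure how close two 3D scans are; the Chamfer distance between point clouds is a common choice for this. A minimal NumPy implementation, illustrative only and not the official evaluation code of any of these benchmarks:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point clouds a (N, 3) and b (M, 3):
    mean nearest-neighbor distance from a to b, plus the same from b to a."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# a toy tetrahedron and a slightly translated copy of it
tetra = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
shifted = tetra + 0.1
score = chamfer_distance(tetra, shifted)   # small, since the shapes nearly coincide
```

The O(N·M) pairwise-distance matrix is fine for toy clouds; for dense scans, real evaluation code uses spatial indices (e.g. k-d trees) to find nearest neighbors.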
				<div class="elementor-element elementor-element-a20eaed flex-horizontal-align-default flex-horizontal-align-tablet-default flex-horizontal-align-mobile-default flex-vertical-align-default flex-vertical-align-tablet-default flex-vertical-align-mobile-default elementor-widget elementor-widget-heading" data-id="a20eaed" data-element_type="widget" data-e-type="widget" data-widget_type="heading.default">
				<div class="elementor-widget-container">
					<div class="title-h3 light elementor-heading-title elementor-size-default"><strong>Conclusion</strong></div>				</div>
				</div>
				<div class="elementor-element elementor-element-10998c8 flex-horizontal-align-default flex-horizontal-align-tablet-default flex-horizontal-align-mobile-default flex-vertical-align-default flex-vertical-align-tablet-default flex-vertical-align-mobile-default elementor-widget elementor-widget-text-editor" data-id="10998c8" data-element_type="widget" data-e-type="widget" data-widget_type="text-editor.default">
				<div class="elementor-widget-container">
												<div class="elementor-text-editor elementor-clearfix">
						
The state of the art reflects a dynamic landscape of innovation. From advanced algorithms to the integration of cutting-edge technologies such as deep learning, computer vision, and sensor fusion, researchers continue to push the boundaries of what is achievable in reconstructing lost cultural heritage and in preserving and understanding our rich cultural legacy.							</div>
										</div>
				</div>
					</div>
		</div>
					</div></div>
		</section>
				</div>
		]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
