While the promise and transformative power of generative AI may be appealing, implementing the technology can be a tricky balancing act: enterprise-wide approaches to data, operations, and talent are fundamental to any integrated strategy. Specifically, real estate companies should take the following factors into consideration:
Data Strategy and Model Validation
“Location, location, location” is no longer the sole determinant of strategic advantage in real estate. Companies are increasingly realizing that “accurate, timely and comprehensive data” holds the key to building competitive advantage. This is especially true now as emerging technologies such as generative AI revolutionize how we interact with data. Building differentiated, company-owned data sets and making data-driven decisions can be the hallmark that sets a company apart from its competitors.
Foundation models such as large language models (LLMs) are trained on general information found online. Real estate use cases, however, may require the training data to include market-, company-, and asset-specific information to mitigate the risk of artifacts or bias in the model. Yet a lack of publicly available information on leases, tenant data, or the operational performance of individual assets can make it difficult to access sufficient, timely, and high-quality information to train these models.
Before embarking on generative AI adoption, real estate companies should assess the overall AI maturity of the organization's technology infrastructure and consider whether they currently have access to the quality data needed to fine-tune and train models. Transformation owners should assign leaders to defined roles, including data governance, quality, and ethics. Companies should choose a data governance framework to ensure data is trustworthy, protected, and compliant.
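As a simplified illustration of what such a data-quality discipline might look like in practice, the sketch below screens lease records for completeness and freshness before admitting them to a fine-tuning data set. All field names, thresholds, and sample values are hypothetical, not a prescribed standard:

```python
from datetime import date

# Illustrative required fields for a lease record (assumed schema).
REQUIRED = ("asset_id", "market", "lease_rate", "as_of")

def is_training_ready(record, max_age_days=365, today=date(2024, 1, 1)):
    """Admit a record to the fine-tuning set only if it is complete
    and recent enough; both criteria here are illustrative."""
    if any(record.get(f) in (None, "") for f in REQUIRED):
        return False  # incomplete records degrade model quality
    return (today - record["as_of"]).days <= max_age_days

records = [
    {"asset_id": "A1", "market": "Austin", "lease_rate": 34.0,
     "as_of": date(2023, 9, 1)},
    {"asset_id": "A2", "market": "Austin", "lease_rate": None,
     "as_of": date(2023, 9, 1)},                  # missing lease rate
    {"asset_id": "A3", "market": "Boston", "lease_rate": 51.0,
     "as_of": date(2021, 1, 1)},                  # stale record
]
clean = [r for r in records if is_training_ready(r)]
```

In a real pipeline these checks would sit alongside the governance roles described above, so that data owners, not the model team alone, decide what counts as trustworthy input.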
Generative AI relies on underlying models that are trained on large amounts of both structured and unstructured, unlabeled data. Generative AI applications can leverage self-supervision techniques, reducing the cost of annotation.22 Front-end applications and prompts will also be more user-friendly and can use natural language for interactions, democratizing the technology and making it accessible across organizations.
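The annotation savings come from the fact that self-supervised objectives derive labels from the data itself. A minimal sketch of how masked-token training pairs can be built from raw text, assuming a toy whitespace tokenizer (real systems use learned subword tokenizers):

```python
import random

def make_masked_examples(text, mask_token="[MASK]", mask_rate=0.15, seed=0):
    """Create self-supervised training pairs from raw text: the label
    for each masked position is simply the original token, so no
    human annotation is required."""
    rng = random.Random(seed)
    tokens = text.split()
    inputs, labels = [], []
    for tok in tokens:
        if rng.random() < mask_rate:
            inputs.append(mask_token)
            labels.append(tok)    # label comes from the data itself
        else:
            inputs.append(tok)
            labels.append(None)   # nothing to predict at this position
    return inputs, labels

inputs, labels = make_masked_examples(
    "net operating income rose as vacancy fell across the portfolio")
```

A model trained to fill in the masks learns from unlabeled documents directly, which is why large unstructured corpora become usable without costly manual labeling.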
Factually incorrect or outdated information can lead to misleading outputs, which in turn can create reputational and financial risks, including legal liability. If the changing dynamics and patterns of the real world are not incorporated into the model, or if the training data is not representative and diverse, model results can deteriorate over time. Building explainability into models (why and how they reach certain conclusions), validating models on a regular basis, and enabling human feedback to AI models are critical to reducing statistical errors and better understanding model predictions.23
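One common way to operationalize regular validation is a drift check that compares the data a model was trained on against live data. The sketch below computes the Population Stability Index (PSI) for a single feature; the rent figures and the widely used ~0.2 alert threshold are illustrative assumptions, not a universal rule:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline (training-era)
    sample and a live sample of the same feature. Bin edges are taken
    from the baseline; values above ~0.2 are commonly read as
    significant drift warranting model revalidation."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def share(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1  # bin index
        # small floor avoids log(0) for empty bins (sketch-level fix)
        return [max(c / len(sample), 1e-6) for c in counts]

    p, q = share(expected), share(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical example: live rents shift upward relative to training data.
baseline = [1000 + 5 * i for i in range(200)]   # training-era rents
live     = [1400 + 5 * i for i in range(200)]   # current rents, shifted
```

Checks like this, run on a schedule and routed to human reviewers, are one concrete form of the periodic validation and human-feedback loop described above.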
Organizational Culture
To effectively adopt generative AI, companies should consider a well-thought-out roadmap with clearly defined goals and milestones. Executives should identify and prioritize high-impact business use cases, evaluate the value opportunity for generative AI, and involve employees in the quest for value creation. Before making significant investments in solutions or technologies, it is advantageous to first review proofs of concept to ensure feasibility and validity. Embedding a strategy that spans enterprise-wide applications, rather than scattering use cases across business units, can provide competitive differentiation.
Companies should also remember that financial KPIs are not the only indicators of success in implementing generative AI technology; non-financial metrics such as increased new-tenant acquisition, reduced wait times for property maintenance, tenant satisfaction, cross-selling of services, and time savings in payment fulfillment can be just as telling.
Human Impact
Depending on the approach (relying on external partners or co-developing AI solutions in-house), companies will need to evaluate their skilled-workforce requirements, the emergence of new roles within the organization such as readiness engineers and fine-tuning experts, and the jobs that technology integration will make redundant. Teams will need to act as co-pilots, with humans working in parallel with the technology.
Risks associated with these models may also require upskilling or reskilling and the creation of new roles and teams, including compliance, ethics, and data governance. For example, a generative AI application deployed in project management or construction management may require experts with project management domain experience to initially curate a reliable database with a diverse set of information to ensure compliance and work safety on-site and to avoid project delays. Generative AI models may also propose building designs that are feasible in the virtual space but unrealistic given real-world construction, zoning, and regulatory constraints, which could have been prevented with the involvement of sector experts. Putting humans at the center of AI decision-making can produce more realistic outcomes and reduce bias and hallucinations.