Knowledge graphs hit the peak of their Gartner hype cycle in 2020. And much as knowledge graph teams celebrated the rise to prominence, "hype" carries an implicit risk: what if the touted promise never emerges? After all, network-based databases (including graph databases) have been in use since the 1960s, just largely confined to academic settings. So what's working for graph adoption today?
For starters, what do we mean by the "knowledge" in knowledge-as-a-service?

In common parlance, knowledge has come to mean **information** plus **context**. This is one of the roots of the term "knowledge graph": while a graph is simply a database structure composed of nodes (entities) and edges (relationships), knowledge graphs are populated to provide both information and context.
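To make that concrete, here is a minimal sketch of a knowledge graph in Python using the networkx library. The entities and relations are illustrative (drawn from the Metaweb history discussed later in this piece), not a production schema:

```python
import networkx as nx

# A knowledge graph at its simplest: nodes are entities,
# edges are typed relationships between them.
kg = nx.MultiDiGraph()

# Nodes carry attributes: the "information."
kg.add_node("Metaweb", type="Organization", founded=2005)
kg.add_node("Google", type="Organization")
kg.add_node("Freebase", type="KnowledgeBase")

# Edges carry relation types: the "context."
kg.add_edge("Metaweb", "Freebase", relation="created")
kg.add_edge("Google", "Metaweb", relation="acquired")

# Context makes facts traversable rather than merely retrievable:
# print everything one hop out from Google.
for _, neighbor, data in kg.out_edges("Google", data=True):
    print(f"Google --{data['relation']}--> {neighbor}")
```

The point of the structure is that last loop: once facts are stored as edges, "context" becomes a graph traversal instead of a series of separate lookups.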
To jump to the present day, knowledge-as-a-service has grown an ecosystem of players providing services at a range of layers, whether you want to build your own knowledge graph or utilize an existing one.

It's also worth noting that these two categories aren't mutually exclusive. Plenty of companies seed, enrich, or update knowledge graphs they've built with existing external knowledge graphs. Additionally, existing knowledge graphs are a particularly well-suited format for giving machine learning models context, and those models can in turn help augment or populate internal knowledge graphs with additional (custom) fields.
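As a rough sketch of what that seeding or enrichment step can look like, consider the snippet below. The record format and the matching of entities by name are simplifying assumptions; real pipelines need proper entity resolution:

```python
# Hypothetical records from an external knowledge graph provider.
external_facts = [
    {"entity": "Acme Corp", "attribute": "employee_count", "value": 1200},
    {"entity": "Acme Corp", "attribute": "industry", "value": "Manufacturing"},
]

# Internal graph, simplified here to: entity name -> attribute dict.
internal_kg = {"Acme Corp": {"crm_owner": "jsmith"}}

for fact in external_facts:
    node = internal_kg.setdefault(fact["entity"], {})
    # Enrich: fill in missing fields without overwriting curated ones.
    node.setdefault(fact["attribute"], fact["value"])

print(internal_kg)
# {'Acme Corp': {'crm_owner': 'jsmith', 'employee_count': 1200,
#                'industry': 'Manufacturing'}}
```

The `setdefault` calls encode the usual design choice in hybrid setups: external facts fill gaps, while internally curated fields win on conflict.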
Whether building, utilizing an external service, or pursuing a hybrid approach, there are two high-level issues that most knowledge projects must wrangle with. The first stems from the fact that meaningful knowledge graphs are often of considerable scale: the price (and correspondingly, the process) of fact accumulation. The second involves the actionable output: how do we get existing knowledge in front of the right person at the right time (knowledge workflows)?

## The Cost of Fact Accumulation

As mentioned previously, knowledge graphs have been utilized in academic and research settings for decades (beginning in the 1970s). While this has been useful for sketching out the theoretical strengths and weaknesses of this form of data modeling, few academic knowledge graphs were of a size useful in commercial settings.

As our ability to quickly store and retrieve data increased exponentially, several enterprise knowledge base companies attempted to build databases of a size useful for a wide range of enterprise applications. In the early 1980s, a long-term project called Cyc was launched. The aim: create a knowledge base large enough to remedy the issue of AI systems whose initial promise falters for lack of ongoing data sources. Cyc was able to compile around 21M fields manually (and at great cost).

In the 1990s, with the proliferation of the internet, (unstructured) data was no longer an issue, with exponential data growth each and every year. But unstructured web data doesn't equate to useful information (let alone "knowledge," particularly on topics of enterprise interest). Metaweb attempted to remedy this by crowdsourcing knowledge. Building off of Wikipedia, they were able to increase Cyc's field count 100x before being acquired and folded into Google.

Together, these two enterprise-focused projects exemplified both the promise of enterprise knowledge graphs and the shortcomings of manual fact accumulation. Because there's a relatively hard cap on how quickly humans can curate answers, there's a hard floor on how low you can drop the price of manual fact accumulation.
You need more people or more time, both of which mean more money. Today the internet has grown significantly, and we do have examples of massive knowledge bases compiled by humans (IMDb, Yelp). But these tend to cover topics of widespread interest and shared knowledge, which is another dead end for large-scale manual knowledge graphs of enterprise use. So how do we go about structuring enterprise knowledge efficiently today?
Powerful web crawlers, natural language processing, and machine vision (with machine learning computations layered on top) have managed to massively increase the size of enterprise-use knowledge bases at a fraction of the price. A huge range of useful enterprise data is already available online and in the news. It simply needs to be structured.
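A minimal sketch of that structuring step, using spaCy for named-entity recognition. The sentence-level co-occurrence heuristic here is a crude stand-in for the trained relation-extraction models that production crawlers actually use:

```python
import itertools

import spacy  # assumes: pip install spacy; python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")

text = "Google acquired Metaweb in 2010. Metaweb built Freebase on top of Wikipedia."
doc = nlp(text)

# Naive heuristic: entities mentioned in the same sentence are related.
triples = []
for sent in doc.sents:
    for a, b in itertools.combinations(sent.ents, 2):
        triples.append((a.text, "co_occurs_with", b.text))

print(triples)
```

Even this toy pipeline shows the economics: once extraction is automated, the marginal cost of a fact is compute rather than analyst hours.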
Among general-use knowledge graphs, GDELT and Diffbot provide the largest coverage areas. Where GDELT automates the organization of facts from worldwide newspapers, Diffbot crawls the entire web and pulls out entities and facts.
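For a sense of what consuming such a service looks like, here is a sketch against GDELT's public DOC 2.0 API. The endpoint follows GDELT's published documentation, but the parameters and response fields should be verified against the current docs:

```python
import json
import urllib.parse
import urllib.request

# GDELT's DOC 2.0 API (no key required) returns structured records
# of worldwide news coverage matching a query.
params = urllib.parse.urlencode({
    "query": "knowledge graph",
    "mode": "artlist",
    "maxrecords": 5,
    "format": "json",
})
url = f"https://api.gdeltproject.org/api/v2/doc/doc?{params}"

with urllib.request.urlopen(url) as resp:
    articles = json.load(resp).get("articles", [])

for art in articles:
    # Field names follow the artlist JSON format; check the docs.
    print(art.get("seendate"), art.get("title"))
```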
Natural language processing pushed to layers one through three listed earlier in this article also enables enterprises to pick and choose where to source their knowledge graphs from. The following sources are commonly parsed into enterprise knowledge graphs: