This article is featured in Bitcoin Magazine's "The Inscription Issue".

Data is the most liquid commodity market in the world. In the smartphone era, unless extreme precautions are taken, everywhere you go, everything you say, and everything you consume is quantifiable among the infinite spectrum of the information goods markets. Information goods, being inherently nonphysical bits of data, can be conceptualized, crafted, produced or manufactured, disseminated, and consumed exclusively as digital entities. The internet, along with other digital technologies for computation and communication, serves as a comprehensive e-commerce infrastructure, facilitating the entire life cycle of designing, producing, distributing, and consuming a wide array of information goods. Existing information goods transition easily from traditional formats to digital ones, to say nothing of the media formats completely infeasible in the analog world.

A preliminary examination of products within the information goods industry reveals that, while they all exist as pure information products and are uniformly impacted by technological advancements, their respective markets undergo distinct economic transformation processes. These variations in market evolution are inherently tied to differences in product characteristics, production methods, distribution channels, and consumption patterns.
Notably, the separation of value creation and revenue processes introduces opportunistic scenarios, potentially leaving established market players with unprofitable customer bases and costly yet diminishing value-creation processes.

Simultaneously, novel organizational architectures may emerge in response to evolving technological conditions, effectively creating and destroying traditional information good markets overnight. The value chains, originally conceived under the assumptions of the traditional information goods economy, undergo radical redesigns as new strategies and tooling materialize in response to the transformative influence of digital production, distribution, and consumption on conventional value propositions for data. For example, mass surveillance was never practical when creating even a single photo meant hours of labor within a specialized photo development room with specific chemical and lighting conditions. Now that there is a camera on every corner, a microphone in every pocket, a ledger entry for every financial transaction, and the means to transmit said data essentially for free across the planet, the market conditions for mass surveillance have unsurprisingly given rise to mass surveillance as a service.

An entirely new industry of "location firms" has grown, with The Markup identifying nearly 50 companies selling location data as a service in a 2021 article by Keegan and Ng titled "There's a Multibillion-Dollar Market for Your Phone's Location Data". One such firm, Near, describes itself as curating "one of the world's largest sources of intelligence on People and Places", having gathered data representing nearly two billion people across 44 countries.
According to a Grand View Research report titled "Location Intelligence Market Size And Share Report, 2030", the global location intelligence market was worth an estimated "$16.09 billion in 2022 and is projected to grow at a compound annual growth rate (CAGR) of 15.6% from 2023 to 2030". The growth of this new information goods industry is mainly "driven by the growing penetration of smart devices and increasing investments in IoT [internet of things] and network services as it facilitates smarter applications and better network connectivity", giving credence to the idea that technological advancement front-runs network growth, which front-runs entirely new forms of e-commerce markets. This, of course, was accelerated by the COVID-19 pandemic, in which government policies resulted in "the increased adoption of location intelligence solutions to manage the changing business scenario as it helps businesses to analyze, map, and share data in terms of the location of their customers", under the guise of user and societal health.

Within any information goods market, there are only two possible outcomes for market participants: distributing the acquired data or keeping it for yourself.

In the fall of 2021, China launched the Shanghai Data Exchange (SDE) in an attempt to create a state-owned monopoly on a novel speculative commodities market for data scraped from one of the most digitally surveilled populations on the planet. The SDE offered 20 data products at launch, including customer flight information from China Eastern Airlines, as well as data from telecommunications network operators such as China Unicom, China Telecom, and China Mobile. Notably, one of the first known trades made at the SDE was the Commercial Bank of China purchasing data from the state-owned Shanghai Municipal Electric Power Company under the guise of improving its financial services and product offerings.
Shortly before the founding of this data exchange, Huang Qifan, the former mayor of Chongqing, was quoted as saying that "the state should monopolize the rights to regulate data and run data exchanges", while also suggesting that the CCP should be highly selective in setting up data exchanges: "Like stock exchanges, Beijing, Shanghai and Shenzhen can have one, but a general provincial capital city or a municipal city should not have it."

While the current information goods market has led to innovations such as speculation on the purchasing of troves of user data, the modern data market was started in earnest at the end of the 1970s, exemplified in the formation of Oracle Corporation in 1977, named after the CIA's "Project Oracle", which featured eventual Oracle Corporation co-founders Larry Ellison, Robert Miner, and Ed Oates. The CIA was their first customer, and in 2002, nearly $2.5 billion worth of contracts came from selling software to federal, state, and local governments, accounting for nearly a quarter of their total revenue. Only a few months after September 11, 2001, Ellison penned an op-ed for The New York Times titled "A Single National Security Database", whose opening paragraph reads: "The single greatest step we Americans could take to make life tougher for terrorists would be to ensure that all the information in myriad government databases was copied into a single, comprehensive national security database". Ellison was quoted in Jeffrey Rosen's book The Naked Crowd as saying: "The Oracle database is used to keep track of basically everything. The information about your banks, your checking balance, your savings balance, is stored in an Oracle database. Your airline reservation is stored in an Oracle database. What books you bought on Amazon is stored in an Oracle database. Your profile on Yahoo! is stored in an Oracle database".
Rosen made note of a discussion with David Carney, a former top-three employee at the CIA, who, after 32 years of service at the agency, left to join Oracle just two months after 9/11 to lead its Information Assurance Center:

"How do you say this without sounding callous?" [Carney] asked. "In some ways, 9/11 made business a bit easier. Previous to 9/11 you pretty much had to hype the threat and the problem." Carney said that the summer before the attacks, leaders in the public and private sectors wouldn't sit still for a briefing. Then his face brightened. "Now they clamor for it!"

This relationship has continued for 20 years, and in November 2022, the CIA awarded its Commercial Cloud Enterprise contract to five American companies — Amazon Web Services, Microsoft, Google, IBM, and Oracle. While the CIA did not disclose the exact value of the contract, documents released in 2019 suggested it could be worth "tens of billions" of dollars over the next 15 years. Unfortunately, this is far from the only data market integration of the private sector, government agencies, and the intelligence community, perhaps best exemplified by data broker LexisNexis.

LexisNexis was founded in 1970 and is, as of 2006, the world's largest electronic database for legal and public-records-related information. On its own website, LexisNexis describes itself as delivering "a comprehensive suite of solutions to arm government agencies with superior data, technology and analytics to support mission success". LexisNexis has nine board members: CEO Haywood Talcove; Dr. Richard Tubb, the longest serving White House physician in U.S. history; Stacia Hylton, former Deputy Director of the U.S. Marshals Service; Brian Stafford, former Director of the U.S.
Secret Service; Lee Rivas, CEO for the public sector and health care business units of LexisNexis Risk Solutions; Howard Safir, former NYPD Commissioner and Associate Director of Operations for the U.S. Marshals Service; Floyd Clarke, former Director of the FBI; Henry Udow, Chief Legal Officer and Company Secretary for the RELX Group; and lastly Alan Wade, retired Chief Information Officer for the CIA.

While Wade was still employed by the CIA, he founded Chiliad with Christine Maxwell, sister of Ghislaine Maxwell and daughter of Robert Maxwell. Christine Maxwell is considered "an early internet pioneer", having founded Magellan, one of the premier search engines on the internet, in 1993. After selling Magellan to Excite, she reinvested her substantial windfall into another big data search technology company: the aforementioned Chiliad. According to a 2020 report by OYE.NEWS, Chiliad made use of "on-demand, massively scalable, intelligent mining of structured and unstructured data through the use of natural language search technologies", with the firm's proprietary software being "behind the data search technology used by the FBI's counterterrorism data warehouse".

As recently as November 2023, the Wade-connected LexisNexis was given a $16-million, five-year contract with U.S. Customs and Border Protection "for access to a powerful suite of surveillance tools", according to available public records, providing access to "social media monitoring, web data such as email addresses and IP address locations, real-time jail booking data, facial recognition services, and cell phone geolocation data analysis tools". Unfortunately, this is far from the only government agency to utilize LexisNexis' data brokerage with the aim of circumventing constitutional law and civil liberties in regards to surveillance.
In the fall of 2020, LexisNexis was forced to settle for over $5 million after a class action lawsuit alleged the broker sold Department of Motor Vehicles data to U.S. law firms, which were then free to use it for their own business purposes. "Defendants websites allow the purchase of crash reports by report date, location, or driver name and payment by credit card, prepaid bulk accounts or monthly accounts", the complaint reads. "Purchasers are not required to establish any permissible use provided in the DPPA to obtain access to Plaintiffs' and Class Members' MVRs". In the summer of 2022, a Freedom of Information Act request revealed a $22 million contract between Immigration and Customs Enforcement and LexisNexis. Sejal Zota, a director at Just Futures Law and a practicing attorney working on the lawsuit, noted that LexisNexis makes it possible for ICE to "instantly access sensitive personal data — all without warrants, subpoenas, any privacy safeguards or any show of reasonableness".

In the complaint from 2022, the use of LexisNexis' Accurint product allows "law enforcement officers [to] surveil and track people based on information these officers would not, in many cases, otherwise be able to obtain without a subpoena, court order, or other legal process…enabling a massive surveillance state with files on almost every adult U.S. consumer".

In 2013, it came to the public's attention that the National Security Agency had covertly breached the primary communication links connecting Yahoo and Google data centers worldwide.
This information was based on documents obtained from former NSA contractor Edward Snowden, published by The Washington Post, and corroborated by interviews of government officials.

As per a classified report dated January 9, 2013, the NSA transmits millions of records daily from internal Yahoo and Google networks to data repositories at the agency's Fort Meade, Maryland headquarters. In the preceding month, field collectors had processed and returned 181,280,466 new records, encompassing "metadata" revealing details about the senders and recipients of emails, along with time stamps, as well as the actual content, including text, audio, and video data.

The primary tool employed by the NSA to exploit these data links is a project named MUSCULAR, carried out in collaboration with the British Government Communications Headquarters (GCHQ). Operating from undisclosed interception points, the NSA and GCHQ copy entire data streams through the fiber-optic cables connecting the data centers of major Silicon Valley corporations.

This becomes particularly perplexing when considering that, as revealed by a classified document acquired by The Washington Post in 2013, both the NSA and the FBI were already actively tapping into the central servers of nine prominent U.S. internet companies. This covert operation involved extracting audio and video chats, photographs, emails, documents, and connection logs, providing analysts with the means to monitor foreign targets. The method of extraction, as outlined in the document, involves direct collection from the servers of major U.S. service providers: Microsoft, Yahoo, Google, Facebook, PalTalk, AOL, Skype, YouTube, and Apple.

During the same period, The Guardian reported that GCHQ — the British counterpart to the NSA — was clandestinely gathering intelligence from these internet companies through a collaborative effort with the NSA.
According to documents obtained by The Guardian, the PRISM program seemingly allows GCHQ to bypass the formal legal procedures required in Britain to request personal materials such as emails, photos, and videos from internet companies based outside the country.

PRISM emerged in 2007 as a successor to President George W. Bush's secret program of warrantless domestic surveillance, following revelations from the news media, lawsuits, and interventions by the Foreign Intelligence Surveillance Court. Congress responded with the Protect America Act in 2007 and the FISA Amendments Act of 2008, providing legal immunity to private companies cooperating voluntarily with U.S. intelligence collection. Microsoft became PRISM's inaugural partner, marking the beginning of years of extensive data collection beneath the surface of a heated national discourse on surveillance and privacy.

In a June 2013 statement, then-Director of National Intelligence James R. Clapper said: "Information collected under this program is among the most important and valuable foreign intelligence information we collect, and is used to protect our nation from a wide variety of threats. The unauthorized disclosure of information about this important and entirely legal program is reprehensible and risks important protections for the security of Americans".

So why the need for collection directly from fiber-optic cables if these private companies themselves are already providing data to the national intelligence community? Upon further inquiry into the aforementioned data brokers to the NSA and CIA, it would appear that a vast majority of the new submarine fiber-optic cables — essential infrastructure to the actualization of the internet as a global data market — are being built out by these same private companies.
These inconspicuous cables weave across the global ocean floor, transporting 95-99% of international data through bundles of fiber-optic strands scarcely thicker than a standard garden hose. In total, the active network comprises over 1,100,000 kilometers of submarine cables.

Traditionally, these cables have been owned by consortia of private companies, primarily telecom providers. However, a notable shift has emerged. In 2016, a significant surge in submarine cable development began, and notably, this time, the purchasers are content providers — particularly the data brokers Meta/Facebook, Google, Microsoft, and Amazon. Of note is Google, having acquired over 100,000 kilometers of submarine cables. With the completion of the Curie cable in 2019, Google's sole ownership of submarine cables globally stands at 1.4%, as measured by length. When factoring in cables with shared ownership, Google's overall share increases to approximately 8.5%. Facebook is close behind with 92,000 kilometers, with Amazon at 30,000, and Microsoft with around 6,500 kilometers from the partially owned MAREA cable.

There is a notable revival in the undersea cable sector, primarily fueled by investments from Facebook and Google, which accounted for around 80% of 2018-2020 investments in transatlantic connections — a significant increase from the less than 20% they accounted for in the preceding three years through 2017, as reported by TeleGeography. This wave of digital giants has fundamentally transformed the dynamics of the industry. Unlike traditional practices where phone companies established dedicated ventures for cable construction, often connecting England to the U.S. for voice calls and limited data traffic, these internet companies now wield considerable influence.
They can dictate the cable landing locations, strategically placing them near their data centers, and have the flexibility to modify the line structures — typically costing around $200 million for a transatlantic link — without waiting for partner approvals. These technology behemoths aim to capitalize on the increasing demand for rapid data transfers essential for various applications, including streaming movies, social messaging, and even telemedicine.

The last time we saw such an explosion of activity in building out essential internet infrastructure was during the dot-com boom of the 1990s, in which phone companies spent over $20 billion to install fiber-optic lines beneath the oceans, immediately before the massive proliferation of personal computers, home internet modems, and peer-to-peer data networks.

The birth of new compression technologies in the form of digital media formats would not itself have given rise to the panopticon we currently operate under without the ability to obfuscate mass uploading and downloading of this newly created data via the ISP rails of both public and private sector infrastructure companies. It is likely no accident that these tools, networks, and algorithms were created under the influence of national intelligence agencies right before the turn of the millennium, the rise of broadband internet, and the sweeping unconstitutional spying on citizens made legal via the Patriot Act in the aftermath of the events of September 11, 2001.

At only 15 years old, Sean Parker — the eventual founder of Napster and first president of Facebook, a former DARPA project titled LifeLog — caught the gaze of the FBI for his hacking exploits, ending in state-appointed community service. One year later, Parker was recruited by the CIA after winning a Virginia state computer science fair by developing an early internet crawling application.
Instead of continuing his studies, he interned for a D.C. startup, FreeLoader, and eventually UUNet, an internet service provider. "I wasn't going to school," Parker told Forbes. "I was technically in a co-op program but in truth was just going to work." Parker made nearly six figures his senior year of high school, eventually starting the peer-to-peer music-sharing site that became Napster in 1999. While working on Napster, Parker met investor Ron Conway, who has backed every Parker product since, having also previously backed PayPal, Google, and Twitter, among others. Napster has been credited as one of the fastest-growing businesses of all time, and its influence on information goods and data markets in the internet age cannot be overstated.

In a study conducted between April 2000 and November 2001 by Sandvine titled "Peer-to-peer File Sharing: The Impact of File Sharing on Service Provider Networks", network measurements revealed a notable shift in bandwidth consumption patterns due to the launch of new peer-to-peer tooling, as well as new compression formats such as MP3. Specifically, the percentage of network bandwidth attributed to Napster traffic increased from 23% to 30%, whereas web-related traffic experienced a slight decrease, from 20% to 19%. By 2002, observations indicated that file-sharing traffic was consuming a substantial portion — up to 60% — of internet service providers' bandwidth.
The creation of new information good markets comes downstream of new technological capabilities, with implications for the scope and scale of current data stream proliferation — clearly visible in peer-to-peer communications' domination of internet user activity.

Of course, peer-to-peer technology did not cease to advance after Napster. "Swarms", a style of downloading and uploading essential to the development of Bram Cohen's BitTorrent, were first built for eDonkey2000 by Jed McCaleb — the eventual founder of Mt. Gox, Ripple Labs, and the Stellar Foundation. The proliferation of advanced packet exchange over the internet has led to entirely new types of information good markets, essentially boiling down to three main categories: public and permanent data, selectively private data, and coveted but difficult-to-obtain data.

While publishing directly to Bitcoin is hardly a new phenomenon, the popularization of Ord — released by Bitcoin developer Casey Rodarmor in 2022 — has led to a massive increase in interest and activity in Bitcoin-native publishing. While some of this can certainly be attributed to a newly formed artistic culture siphoning away activity and value from Ethereum — and other alternative businesses making erroneous claims of blockchain-native publishing — the majority of this volume comes downstream from the construction of inscription transactions that use the SegWit discount via specially authored Taproot scripts, and from the awareness of the immutability, durability, and availability of data offered solely by the Bitcoin blockchain.
The SegWit discount was specifically created to incentivize the consolidation of unspent transaction outputs and limit the creation of excessive change in the UTXO set, but as for its implications on Bitcoin-native publishing, it has essentially created a substantial 75% markdown on the cost of the bits within a block that are stuffed with arbitrary data within an inscription. This is far from a non-factor in the creation of a sustainable information goods market.

Taking this one step further, the implementation of a self-referential inscription mechanism allows users to string data publishing across multiple Bitcoin blocks, limiting the costs of fitting a file into a single block auction. This implies both the ability to inscribe files beyond 4 MB, as well as the utility of referencing previously inscribed material, such as executable software, code for generative art, or the image assets themselves. In the case of the recent Project Spartacus, recursive inscriptions using what is known as a parent inscription were employed to enable a crowdfunding mechanism, publicly sourcing the satoshis needed to publish the Afghan War Logs onto the Bitcoin blockchain forever. This solves the need for public and permanent publishing of known and available data by a pseudonymous set of users, but requires certain data availability during the minting process itself, which opens the door to centralized pressure points and potential censoring of inscription transactions within a public mint by nefarious mining pools.

With the advent of Bitcoin-native inscriptions, the possibility of immutable, durable, and censorship-reduced publishing has come to fruition. The current iteration of inscription technology allows users to post their data via a permanent but publicly propagated Bitcoin transaction.
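The 75% markdown described above falls out of BIP 141's weight formula: each base byte counts as four weight units, each witness byte (where inscription content lives) counts as one, and fees are assessed on weight divided by four, in "virtual bytes". A minimal sketch of the arithmetic:

```python
import math

def vsize(base_bytes: int, witness_bytes: int) -> int:
    """Virtual size in vbytes: a base byte weighs 4 units, a witness byte 1 (BIP 141)."""
    weight = 4 * base_bytes + witness_bytes
    return math.ceil(weight / 4)

# The same 1,000-byte payload, carried as base data versus as witness data
# (the latter being where inscription envelopes place their content).
as_base_data = vsize(base_bytes=1_000, witness_bytes=0)     # 1,000 vbytes
as_witness_data = vsize(base_bytes=0, witness_bytes=1_000)  # 250 vbytes
```

Since fees are quoted per vbyte, the witness placement makes each byte of inscribed data cost one quarter of a base byte, i.e. the 75% discount.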
However, the public propagation of this data has led to yet-to-be-confirmed inscription transactions and their associated content being noticed while within the mempool itself. This issue can be mitigated by introducing encryption within the inscription process, leaving encrypted but otherwise innocuous data to be propagated by Bitcoin nodes and eventually published by Bitcoin miners, with no ability to be censored based on content. This also removes the ability for inscriptions meant for speculation to be front-run by malicious collectors who pull inscription data from the mempool and rebroadcast it at an increased fee rate in order to be confirmed sooner.

Precursive inscriptions aim to enable the private, encrypted publishing of data spread out over multiple Bitcoin blocks that can be published at a whim via a recursive publishing transaction containing the private key to decrypt the previously inscribed data. For instance, a collective of whistleblowers could discreetly upload data to the Bitcoin blockchain, unbeknownst to miners or node runners, while deferring its publication until a preferred moment. Since the data will be encrypted during its initial inscribing phase, and since the data will be seemingly uncorrelated until it is recursively associated by the publishing transaction, a user can continually re-sign and propagate the time-locked parent inscription for extended durations of time. If the user cannot sign a further time-locked publishing transaction due to incarceration, the propagated publishing transaction will be confirmed after the time-lock period ends, thus giving the publisher a dead man's switch mechanism.

The specially authored precursive inscription process presented in this article offers a novel approach to secure and censorship-resistant data publishing within the Bitcoin blockchain.
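The two-phase flow (inscribe opaque ciphertext now, reveal the key later) can be sketched as follows. This is a toy illustration, not the actual precursive tooling: the XOR keystream stands in for a real authenticated cipher, and the 520-byte chunking mirrors Bitcoin script's per-push limit.

```python
import hashlib
import secrets

def toy_stream_cipher(key: bytes, data: bytes) -> bytes:
    # Illustrative XOR keystream derived from SHA-256; a real implementation
    # would use an authenticated cipher. Applying it twice decrypts.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Phase 1: inscribe seemingly uncorrelated ciphertext chunks across blocks.
key = secrets.token_bytes(32)
document = b"whistleblower archive bytes..."
ciphertext = toy_stream_cipher(key, document)
chunks = [ciphertext[i:i + 520] for i in range(0, len(ciphertext), 520)]

# Phase 2: the later publishing transaction reveals `key`, letting anyone
# reassemble the chunks and decrypt the original document.
revealed = toy_stream_cipher(key, b"".join(chunks))
assert revealed == document
```

Until the key is revealed, nodes and miners propagate nothing but innocuous-looking random bytes, which is what removes content as a basis for censorship.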
By leveraging the inherent characteristics of the Bitcoin network, such as its decentralized and immutable nature, the method described here addresses several key challenges in the field of information goods, data inscription, and dissemination. The primary objective of precursive inscriptions is to enhance the security and privacy of data stored on the Bitcoin blockchain, while also mitigating the risk of premature disclosure. One of the most significant advantages of this approach is its ability to ensure that the content remains concealed until the user decides to reveal it. This process not only provides data security but also maintains data integrity and permanence within the Bitcoin blockchain.

This leads us to the third and final fork of the information good data markets needed for the modern age: setting the price for wanted but currently unobtained bits.

ReQuest aims to create a novel data market allowing users to issue bounties for coveted data, seeking the secure and immutable storage of specific information on the Bitcoin blockchain. The primary bounty serves a dual role by covering publishing costs and rewarding those who successfully fulfill the request. Additionally, the protocol allows for the increase of bounties through contributions from other users, increasing the chances of successful fulfillment. Following an inscription submission, the users who initiated the bounty can participate in a social validation process to verify the accuracy of the inscribed data.

Implementing this concept involves a combination of social vetting to ensure data accuracy, evaluating contributions to the bounty, and adhering to specific contractual parameters measured in byte size. The bounty fulfillment process requires eligible fulfillers to submit their inscription transaction hash or a live magnet link for consideration.
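Where a bounty issuer publishes the SHA-256 digest of the wanted file up front, a submission can be checked mechanically rather than socially. A minimal sketch of such a check (the function and variable names are illustrative, not part of any ReQuest specification):

```python
import hashlib

def fulfills_bounty(candidate_file: bytes, bounty_digest_hex: str) -> bool:
    """True iff the submitted bytes hash to the digest the bounty asked for."""
    return hashlib.sha256(candidate_file).hexdigest() == bounty_digest_hex

# The issuer publishes only the digest of, say, a known software release;
# any fulfiller who can produce matching bytes provably has the wanted file.
wanted = b"known-software-release-v1.0"
digest = hashlib.sha256(wanted).hexdigest()

assert fulfills_bounty(wanted, digest)
assert not fulfills_bounty(b"tampered-release", digest)
```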
In cases where the desired data is available but not natively published on Bitcoin — or widely known but currently unavailable, such as a renowned .STL file or a software client update — the protocol offers an alternative to social consensus for fulfillment: hashing the file and verifying the resulting SHA-256 output, which provides a foolproof means of meeting the bounty's requirements. The collaborative nature of these bounties, coupled with their ability to encompass various data types, ensures that ReQuest's model can effectively address a broad spectrum of information needs in the market.

For ReQuest bounties involving large file sizes unsuitable for direct inscription on the Bitcoin blockchain, an alternative architecture known as Durabit has been proposed, in which a BitTorrent magnet link is inscribed and its seeding is maintained through a Bitcoin-native, time-locked incentive structure.

Durabit aims to incentivize durable distribution of large data in the information age. Through time-locked Bitcoin transactions and the use of magnet links published directly within Bitcoin blocks, Durabit encourages active long-term seeding while even helping to offset initial operational costs. As the bounty escalates, it becomes increasingly attractive for users to participate, creating a self-sustaining incentive structure for content distribution. The Durabit protocol escalates the bounty payouts to provide a sustained incentive for data seeding. This is done not by increasing rewards in satoshi terms, but rather by increasing the epoch length between payouts exponentially, leveraging the assumed long-term price increase due to deflationary economic policy in order to keep initial distribution costs low. Durabit has the potential to architect a specific type of information goods market via monetized file sharing and further integrate Bitcoin into the decades-long, peer-to-peer revolution.
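That escalation mechanic (flat satoshi payouts whose epochs lengthen exponentially) can be sketched as a payout schedule. The starting height, reward size, and first epoch length below are hypothetical parameters chosen for illustration, not values from the Durabit proposal:

```python
def durabit_schedule(start_height: int, reward_sats: int,
                     first_epoch_blocks: int, payouts: int) -> list[tuple[int, int]]:
    """(block_height, reward_sats) pairs: the sat amount stays flat, but each
    epoch between payouts doubles, so rewards thin out over time while the
    assumed long-term price appreciation keeps seeding worthwhile."""
    schedule = []
    height, epoch = start_height, first_epoch_blocks
    for _ in range(payouts):
        height += epoch
        schedule.append((height, reward_sats))
        epoch *= 2  # exponential lengthening: the deflation-side lever
    return schedule

# e.g. five 50,000-sat payouts to a seeder, first epoch roughly one month.
schedule = durabit_schedule(start_height=830_000, reward_sats=50_000,
                            first_epoch_blocks=4_320, payouts=5)
```

The issuer's upfront cost is bounded in sats, while the later, widely spaced payouts rely on purchasing-power growth rather than larger nominal rewards.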
These novel information good markets, actualized by new Bitcoin-native tooling, can potentially reframe the fight for publishing, finding, and upholding data as the public square continues to erode.

The information war is fought on two fronts: the architecture that incentivizes durable and immutable public data publishing, and the disincentivization of the large-scale gathering of personal data — often sold back to us in the form of specialized commercial content, or surveilled by intelligence to aid in targeted propaganda, psychological operations, and the restriction of dissident narratives and publishers. The conveniences offered by walled garden apps and the private-sector-in-name-only networks are presented in order to access troves of metadata from real users. While user metrics can be inflated, the data gleaned from bots is completely useless to data-harvesting commercial applications such as Large Language Models (LLMs) and current applicable AI interfaces.

There are two axes on which these algorithms necessitate verifiable data: the authenticity of the model's code itself, and the selected input it inevitably parses. As for the protocol itself, in order to ensure replicability of desired features and mitigate any harmful adversarial functionality, techniques such as hashing previously audited code upon publishing state updates could be utilized. Dealing with the input of these LLMs' learning fodder is seemingly also two-pronged: cryptographic sovereignty over the data that is actually valuable to the open market, and the active jamming of signal fidelity with data-chaff. It is perhaps not realistic to expect your everyday person to run noise-generating APIs that constantly feed the farmed, public datasets with heaps of lossy data, causing data-driven feedback in these self-learning algorithms.
But by creating alternative data structures and markets, built to the qualities of the specific "information good", we can perhaps incentivize — or at least subsidize — the perceived economic cost of everyday people giving up their convenience. The deflation of publishing costs via digitization and the interconnectivity of the internet has made it all the more essential for everyday people to at least take back control of their own metadata.

It is not simply data that is the new commodity of the digital age, but your data: where you have been, what you have purchased, who you talk to, and the many manipulated whys that can be triangulated from the aforementioned wheres, whats, and whos. By restricting access to this data via obfuscation methods such as using VPNs, transacting with private payment tools, and choosing hardware powered by certain open-source software, users can meaningfully increase the cost of data harvesting by the intelligence community and its private sector compatriots. The information age requires engaged participants, incentivized by the structures upholding and distributing the world's data — their data — on the last remaining alcoves of the public square, as well as encouraged and active retention of our own information.

Most of the time, a random, large number represented in bits is of little value to a prospective buyer. And yet Bitcoin's store-of-value property is derived entirely from users being able to publicly and immutably publish a signature to the blockchain, possible only through the successful keeping of a private key secret. A base-layer Bitcoin transaction fee is priced not by the amount of value transferred, but by how many bytes of space are required in a specific block to articulate all of its spend restrictions, represented in sat/vbyte.
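That pricing rule is simple enough to state in code: the fee is block space consumed times fee rate, and the value being moved appears nowhere in the formula. A minimal sketch with illustrative numbers:

```python
def fee_sats(tx_vbytes: int, feerate_sat_per_vbyte: int) -> int:
    """Fees buy block space; the amount of value transferred never enters in."""
    return tx_vbytes * feerate_sat_per_vbyte

# A typical ~140-vbyte spend at 25 sat/vbyte costs the same
# whether it moves 0.001 BTC or 1,000 BTC.
fee = fee_sats(140, 25)
```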
Bitcoin is a database that manages to incentivize users to replicate its ledger, communicate its state updates, and commit large swaths of energy to randomize its consensus model.

Every ten minutes, on average, another 4 MB auction.

If you want information to be free, give it a free market.