
WordPress-based Crawler Implementation #131

Draft · wants to merge 4 commits into base: docs/extend
12 changes: 11 additions & 1 deletion .eslintrc
@@ -22,5 +22,15 @@
"no-console": [
"off"
]
}
},
"overrides": [
{
"files": [
"src/crawler/**/*.ts"
],
"rules": {
"react/no-is-mounted": "off"
}
}
]
}
2 changes: 1 addition & 1 deletion EXTEND.md
@@ -15,7 +15,7 @@ the future or any third-party plugin to transform the data upon installation of
## Storage Architecture

Try WordPress stores all liberated data in a custom post type called `liberated_data`, exposed via a constant:
`\DotOrg\TryWordPress\Engine::STORAGE_POST_TYPE`.
`\DotOrg\TryWordPress\Engine::LIBERATED_DATA_POST_TYPE`.

We maintain references between the source data and transformed output using two post meta keys:

261 changes: 261 additions & 0 deletions src/crawler/crawler.ts
@@ -0,0 +1,261 @@
import { CommandTypes, sendCommandToContent } from '@/bus/Command';

interface CrawlerState {
isActive: boolean;
nextProcessTime: number;
rateLimit: number;
}

interface QueueUrlsResponse {
accepted: number;
rejected: number;
queueSize: number;
crawledCount: number;
}

interface NextUrlResponse {
url: string;
}

interface QueueUrlsRequest {
urls: string[];
sourceUrl: string;
}

class Crawler {

[GitHub Actions / lint] Check failure on line 25 in src/crawler/crawler.ts: 'Crawler' is defined but never used
private readonly state: CrawlerState;
private process: ( html: string ) => Promise< void >;

constructor() {
this.state = {
isActive: false,
nextProcessTime: 0,
rateLimit: 1.0, // pages per second; 1.0 means a 1000ms delay between requests
};
// Initialize with empty process function
this.process = async () => {};
}

private log( level: 'log' | 'warn' | 'error', ...args: unknown[] ): void {
console[ level ]( ...args );
}

// Allow setting the process function
public setProcessFunction(
processFn: ( html: string ) => Promise< void >
): void {
this.process = processFn;
}

public async start(): Promise< void > {
if ( this.state.isActive ) {
this.log( 'log', 'Crawler already running' );
return;
}

this.state.isActive = true;
this.log( 'log', 'Crawler started' );

while ( this.state.isActive ) {
const next = await this.getNextUrl();
if ( next ) {
await this.processUrl( next );
} else {
this.state.isActive = false;
this.log( 'log', 'Crawler finished' );
}
}
}

private async processUrl( url: string ): Promise< void > {
this.log( 'log', 'processing url', url );
try {
// Wait until we're allowed to process the next URL
await this.waitForRateLimit();

await this.navigateToUrl( url );

// @TODO: Get the HTML content via bus?
const html = document.documentElement.outerHTML;

// Process the page content
await this.process( html );

// Extract and queue new URLs
const links = this.extractLinks( html );
await this.queueUrls( links, url );
} catch ( error ) {
this.log( 'error', 'Error processing URL', url, error );
this.state.isActive = false;
}
}

private async waitForRateLimit(): Promise< void > {
const now = Date.now();
// Convert the rate limit to a delay between requests,
// e.g. 0.5 pages per second means 2000ms between requests.
const delayMs = 1000 / this.state.rateLimit;

if ( now < this.state.nextProcessTime ) {
await new Promise( ( resolve ) =>
setTimeout( resolve, this.state.nextProcessTime - now )
);
}

// Schedule the next slot from whichever is later: the moment we
// became eligible or now. Computing it from the stale `now` alone
// would let requests run faster than the configured rate.
this.state.nextProcessTime =
Math.max( now, this.state.nextProcessTime ) + delayMs;
}

private extractLinks( htmlString: string ): string[] {

Review comment:

We have similar plumbing in PHP:

	$p = new WP_Block_Markup_Url_Processor( $options['block_markup'], $options['base_url'] );
	while ( $p->next_url() ) {
		$parsed_url = $p->get_parsed_url();
		foreach ( $url_mapping as $mapping ) {
			if ( url_matches( $parsed_url, $mapping['from_url'] ) ) {
				$p->replace_base_url( $mapping['to_url'] );
				break;
			}
		}
	}

See how it also matches the domains and paths to stay within the same site. It might be handy to delegate that work to PHP.

// Create a DOM parser instance
const parser = new DOMParser();

// Parse the HTML string into a document
const doc = parser.parseFromString( htmlString, 'text/html' );

// Find all anchor tags
const linkElements = doc.querySelectorAll( 'a' );

// Convert NodeList to Array and extract link data
const links = Array.from( linkElements ).map( ( link ) => {
// Get the href attribute
const href = link.getAttribute( 'href' );

// Skip if no href, or it's a javascript: link or anchor link
if (
! href ||
href.startsWith( 'javascript:' ) ||
href.startsWith( '#' )
) {
return null;
}

// Try to resolve relative URLs to absolute. Resolving against the
// full page URL rather than the bare origin keeps relative paths
// correct on nested pages (e.g. `foo.html` under `/blog/post/`).
let absoluteUrl;
try {
absoluteUrl = new URL( href, window.location.href ).href;
} catch ( e ) {
// If URL parsing fails, use the original href
absoluteUrl = href;
}

const isExternal = link.hostname !== window.location.hostname;
if ( isExternal ) {
return null;
}

return absoluteUrl;
} );

// Filter out null values and return unique links; a Set
// de-duplicates in O(n) instead of a quadratic findIndex scan
return Array.from(
new Set( links.filter( ( link ) => link !== null ) )
);
}

private async queueUrls(
Review comment (@adamziel, Nov 29, 2024):

We're building just that in the PHP plugin! :-) There are concurrent requests, I'm exploring resource limits, and if we run it in Playground, we'll still do the downloads via fetch(), which means we benefit from authorized cookies.

urls: string[],
sourceUrl: string,
retryCount = 0,
maxRetries = 5
): Promise< QueueUrlsResponse > {
const request: QueueUrlsRequest = {
urls,
sourceUrl,
};

const response = await fetch( '/crawl-api/queue-urls', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify( request ),
} );

if ( ! response.ok ) {
this.log(
'warn',
`Attempt ${
retryCount + 1
}/${ maxRetries } failed: HTTP error! status: ${
response.status
}`
);

if ( retryCount >= maxRetries - 1 ) {
return Promise.reject(
new Error(
`Failed to queue URLs after ${ maxRetries } attempts`
)
);
}

// Wait before retrying
await this.sleep();

// Recursive call with an incremented retry count
return this.queueUrls( urls, sourceUrl, retryCount + 1, maxRetries );
}

return response.json();
}

private async sleep( ms: number = 1000 ): Promise< void > {
return new Promise( ( resolve ) => setTimeout( resolve, ms ) );
}

private async getNextUrl(
retryCount = 0,
maxRetries = 5
): Promise< string | null > {
const response = await fetch( '/crawl-api/next-url' );

// crawling queue is finished
if ( response.status === 204 ) {
return null;
}

if ( ! response.ok ) {
this.log(
'warn',
`Attempt ${
retryCount + 1
}/${ maxRetries } failed: HTTP error! status: ${
response.status
}`
);

if ( retryCount >= maxRetries - 1 ) {
return Promise.reject(
new Error(
`Failed to get next URL after ${ maxRetries } attempts`
)
);
}

// Wait before retrying
await this.sleep();

// Recursive call with an incremented retry count
return this.getNextUrl( retryCount + 1, maxRetries );
}

const data: NextUrlResponse = await response.json();
return data.url;
}

private async navigateToUrl( url: string ): Promise< void > {
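// Note: the command is dispatched fire-and-forget, so this resolves
// before the navigation necessarily completes (related to the @TODO
// in processUrl about fetching the HTML via the bus).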
void sendCommandToContent( {
type: CommandTypes.NavigateTo,
payload: { url },
} );
}

public stop(): void {
this.state.isActive = false;
}

public updateRateLimit( newLimit: number ): void {
// Clamp to between 0.1 and 10 pages per second; the bounds are arbitrary and can be adjusted
this.state.rateLimit = Math.max( 0.1, Math.min( 10.0, newLimit ) );
}
}
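For context, here is a minimal sketch of how the class above might be driven from the extension side. It assumes the class gets exported (the lint failure noted earlier shows it currently is not), that the `@/` alias from the diff resolves to `src/`, and that `processHtml` stands in for whatever handler the extension wires in; those names are illustrative, not part of the PR:

	import { Crawler } from '@/crawler/crawler';

	// Hypothetical handler; the constructor leaves `process` as a
	// no-op until one is injected via setProcessFunction().
	async function processHtml( html: string ): Promise< void > {
		console.log( 'received', html.length, 'characters' );
	}

	const crawler = new Crawler();
	crawler.setProcessFunction( processHtml );
	crawler.updateRateLimit( 0.5 ); // at most one page every two seconds
	void crawler.start(); // loops until /crawl-api/next-url returns 204
	// ...later, e.g. from a UI control:
	// crawler.stop();

Note that stop() only flips isActive, so the page currently being processed finishes before the loop exits.

The review comment on queueUrls mentions concurrent downloads through fetch() that reuse authorized cookies. As a rough TypeScript illustration of that idea (the actual work lives in the PHP plugin; this function and its parameters are invented for the sketch):

	// Hypothetical bounded-concurrency downloader. `credentials:
	// 'include'` is what lets fetch() send the authorized cookies
	// mentioned in the review comment.
	async function fetchAll(
		urls: string[],
		concurrency = 4
	): Promise< string[] > {
		const results: string[] = new Array( urls.length );
		let nextIndex = 0;

		// Each worker pulls the next unclaimed index; single-threaded
		// JS makes the shared cursor safe without locks.
		async function worker(): Promise< void > {
			while ( nextIndex < urls.length ) {
				const i = nextIndex++;
				const response = await fetch( urls[ i ], {
					credentials: 'include',
				} );
				results[ i ] = await response.text();
			}
		}

		await Promise.all(
			Array.from(
				{ length: Math.min( concurrency, urls.length ) },
				worker
			)
		);
		return results;
	}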
32 changes: 32 additions & 0 deletions src/plugin/class-controller-registry.php
@@ -0,0 +1,32 @@
<?php

namespace DotOrg\TryWordPress;

class Controller_Registry {

public function __construct( string $liberated_data_post_type, string $crawler_queue_post_type ) {
new Blogpost_Controller( $liberated_data_post_type );
new Page_Controller( $liberated_data_post_type );

$domain = $this->infer_domain( $liberated_data_post_type );

new Crawler_Controller( $domain, $crawler_queue_post_type );
}

private function infer_domain( string $liberated_data_post_type ): string {
$liberated_posts = get_posts(
array(
'post_type' => $liberated_data_post_type,
'posts_per_page' => 1,
'post_status' => 'draft',
)
);

if ( ! empty( $liberated_posts ) ) {
$parts = wp_parse_url( $liberated_posts[0]->guid );
// Guard against malformed guids that lack a scheme or host.
if ( isset( $parts['scheme'], $parts['host'] ) ) {
return $parts['scheme'] . '://' . $parts['host'];
}
}

return '';
}
}