Scrapebox Automator Plugin – Automate Scrapebox!

Scrapebox automation is no longer a dream; it is now a reality.  The Automator plugin allows you to automate the most common Scrapebox tasks.

You can scrape keywords, scrape urls, test proxies, post content, send trackbacks, check links and more, all one right after the other.

The way it works is you create a job in the plugin.  The job lists out, in order, what Scrapebox is supposed to do.  You can customize the various options just like in Scrapebox.

When you are done, you save the job and then go to the Automator menu in Scrapebox.  Simply load the job and Scrapebox does the rest.

It's important to understand that the Automator is not software that "does" the work; it simply creates a "list," if you will, of things for Scrapebox to do.  So Scrapebox works just the same, and the functions in the Automator are the same as in Scrapebox, because it is Scrapebox doing the work.
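Conceptually, then, a job is just an ordered list of tasks that Scrapebox runs one after another. Here is a rough Python sketch of that idea (purely illustrative; this is not the Automator's actual job file format, and the task names are invented for the example):

```python
# Illustrative sketch only: an Automator job amounts to an ordered
# list of tasks handed to Scrapebox, which does the real work.
# The task names below are made up for the example.
job = [
    {"task": "scrape_keywords", "options": {"seeds": ["dog training"]}},
    {"task": "harvest_urls", "options": {}},
    {"task": "test_proxies", "options": {}},
    {"task": "post_comments", "options": {}},
]

def run_job(job, runner):
    """Hand each task, in order, to whatever does the real work."""
    for step in job:
        runner(step["task"], step["options"])
```

The point of the sketch is the ordering: the plugin records the sequence, and Scrapebox executes each step in turn.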

Dupe Remove Addon for Scrapebox – Remove duplicate urls in large quantities

The Scrapebox dupe remove addon allows you to load up many files and then merge them into one file.

After you have merged the files, you have the option to remove either duplicate urls or duplicate domains.  Removing duplicate urls gets rid of any urls that are exact duplicates but keeps multiple urls from the same domain, so long as each one is unique.  Removing duplicate domains gets rid of all duplicate urls from a given domain, keeping only one url from each domain.

You also have the option to split files into smaller segments.  This is nice because it allows you to work with smaller files and gives you custom control over the size of your list pieces.

The addon works on a line-by-line basis, so you can remove duplicate lines, not just urls.
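The two removal modes amount to something like the following Python sketch (the function names are mine, not the addon's; the "duplicate domains" mode keeps the first url seen per domain):

```python
from urllib.parse import urlparse

def remove_duplicate_urls(lines):
    """Keep the first occurrence of each exact line (url)."""
    seen = set()
    out = []
    for line in lines:
        if line not in seen:
            seen.add(line)
            out.append(line)
    return out

def remove_duplicate_domains(lines):
    """Keep only the first url seen for each domain."""
    seen = set()
    out = []
    for line in lines:
        domain = urlparse(line).netloc.lower()
        if domain not in seen:
            seen.add(domain)
            out.append(line)
    return out
```

Note that the first function compares whole lines, which is why the addon can dedupe any line-based list, not just urls.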

Comment Posting Success in Scrapebox – Private Proxies vs Public Proxies vs no Proxies


I have often heard people questioning or arguing about what the best option is to use when posting comments with Scrapebox.

Should you use Public proxies with higher connections?

Should you use Private proxies with lower connections?

Should you use no proxies?

Here is what I did.  I took a list I scraped of 650K urls and ran the same list on 3 different instances of Scrapebox, all on the same server, all side by side at the same time.  I randomized the 650K list separately for each instance, so that I wouldn't be hitting the same urls at the same time in each instance.

Public Proxies – 350 connections

I got 600 public proxies that all tested as either medium or fast speed in Scrapebox's proxy tester.  I ran this instance at 350 connections.  This run finished first in overall time.

Finished first at 573 minutes

72K Successful Urls


Private Proxies from IPfreely – 90 connections

I ran the same 650K list with private proxies from IPFreely, at 90 connections.

Finished 2nd at 1018 minutes

199K Successful Urls


No Proxies – 150 connections

I ran the same 650K list with no proxies.  The server these were all run on doesn't care about complaints, so running without proxies was an option.  I chose 150 connections for this instance.

Finished last for overall time at 1218 minutes.

174K Successful Urls


Conclusion

There is no solid conclusion from this test per se, as it's not a controlled split test.  The point of this test was just to gather comparative information.

We can see that no proxies and private proxies finished fairly closely on success counts, and that no proxies was quite a bit slower.  The no-proxies run used quite a few more connections and still finished slower than the private-proxies run.  I won't go into depth as to why this is, and frankly I don't have a highly descriptive answer.  However, it's easy enough to see that DNS resolution on the single server running with no proxies is slower than the combined DNS resolution of the various proxy servers.

Overall, I am just presenting this information so you can more easily make your own evaluation about what to use.

Scrapebox Addons IPC system update – addons attach to the instance they are launched from

The Scrapebox developers overhauled the addon system, and addons now attach to the instance they are launched from.

In the past they would attach to whichever instance Windows happened to choose.

In order to take advantage of this update you need to upgrade to Scrapebox version 1.15.55 or higher, and you need to reinstall all of your addons.  In the video I go over how to reinstall all of your addons in one easy step.

This is a great help for many of us, myself included.  I have often found my addons attaching to an instance I didn't want them pulling data from.

Scrapebox Domain Name Lookup Tool – Check to see if a domain is available

The Domain name lookup tool in scrapebox is a pretty handy tool, especially since it is tied to the keyword scraper.

The advantage of being tied to the keyword scraper is that you can start with seed keywords and then scrape suggestions from sources like Google Suggest and Yahoo Search Assistant etc…  Then you can take those keywords and easily check whether matching domains are available.  You get the option of checking multiple tlds, including .com, .org and more.
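The keyword-to-domain step is easy to picture. Here is a hedged Python sketch of turning scraped keywords into candidate domains across several tlds (the function name and cleanup rules are my assumptions, not exactly what Scrapebox does; the actual availability check is a separate lookup the tool performs):

```python
def domain_candidates(keywords, tlds=(".com", ".org", ".net")):
    """Turn scraped keywords into domain-name candidates to check for availability."""
    candidates = []
    for kw in keywords:
        # Domains can't contain spaces; keep only letters, digits, and hyphens.
        name = "".join(ch for ch in kw.lower() if ch.isalnum() or ch == "-")
        for tld in tlds:
            candidates.append(name + tld)
    return candidates
```

Each candidate would then be run through a whois or DNS lookup to see if it is actually available, which is the part the lookup tool handles for you.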

A lot of people don’t know that this handy tool exists because it is tucked away inside the keyword scraper.  However this is the most ideal spot for it, because you can get a lot of excellent ideas from the keyword scraper.

Google index checker and also bing index checker in Scrapebox

The index checker in Scrapebox allows you to load a list of urls into the urls harvested section.  After you do that, you choose to check indexed and then choose whether you want to check if the urls are indexed in Google or Bing.

Using proxies is generally a good idea when checking whether sites are indexed in Google or Bing.  After the check completes, you can export the non-indexed pages, or export only the indexed urls.  You also have sorting options.
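Those export options amount to partitioning the results. A minimal sketch, assuming the results come back as (url, indexed) pairs (Scrapebox's internal representation is not actually exposed like this):

```python
def partition_by_index_status(results):
    """Split (url, indexed) pairs into indexed and non-indexed url lists,
    mirroring the two export options after an index check."""
    indexed = [url for url, ok in results if ok]
    not_indexed = [url for url, ok in results if not ok]
    return indexed, not_indexed
```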

This can be a great way to quickly find non indexed urls or links and then export them and do additional work to get your links indexed.