Batch URL Error 413
-
Hello, I am using the free Mozscape API to get the domain and page authority of URLs. I am batching the URLs as shown in the sample code; however, I have over 500 URLs to check, and when running the request I get the following:
stdClass Object ( [status] => 413 [error_message] => Too many urls in batch. Batches must be less than or equal to 200 urls. )
When I change it to 200 URLs the request works fine. Is there a way for me to batch all 500 URLs in one request? I did read that the beta API is capable of batching more URLs per request: http://moz.com/blog/400-higher-throughput-mozscape-api-now-in-beta-and-seeking-testers
Has this been implemented in the current API yet?
Thanks
-
Hi Paul!
I have confirmed with our engineers: the limit for batched URLs is 50 on the free API and 200 for paid access. This will go into effect with our next update, next month.
-
Hi Paul!
This is a great question! You might have found a loophole, as we actually went with 200 URLs for paid users, while free users are limited to 10 URLs. I am checking with our engineering team to find out exactly what the limits are.
Here are the current rates according to our wiki: http://apiwiki.moz.com/best-practices
Once I hear back from them I will let you know what I find out!
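Until larger batches are supported, the usual workaround is to split the list client-side and issue one request per batch. A minimal Python sketch (the example URLs are placeholders, and the actual Mozscape request is left as a comment since it depends on your credentials and tier limit):

```python
def chunk(urls, batch_size=200):
    """Split a list of URLs into batches no larger than the API limit."""
    return [urls[i:i + batch_size] for i in range(0, len(urls), batch_size)]

# Hypothetical list of 500 URLs to look up.
urls = ["http://example.com/page-%d" % n for n in range(500)]

for batch in chunk(urls):
    # Send one Mozscape request per batch here and merge the responses,
    # staying within the rate limits documented on the API wiki.
    pass
```

With a 200-URL limit this turns the 500 URLs into three requests (200, 200, 100); set `batch_size` to whatever limit applies to your account.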
Related Questions
-
Unsolved "The account does not have access to that quota" - API v1 URL Metrics
Hi! On one of our servers we get the message "The account does not have access to that quota" when pulling data from the v1 URL Metrics endpoint in the API. This only happens on one specific server; others have worked flawlessly. Any idea what might be going on? Thanks
API | aheitzman
-
Location Data Batch Updates via the MOZ API
According to the Moz API documentation, I should be able to update multiple locations in a batch in order to create or update their location data (currently 130 locations). I have successfully created a batch, and the API returned the $id as I expected. Yet it isn't clear how the multiple locations I want to update are supposed to be sent to the API. I was expecting an upload of a CSV file or JSON data, not a query parameter as noted in the docs. When I include a properly formatted JSON file as a binary upload, the response still expects a locations parameter. See the error here:
{ "status": "MISSING_PARAMETER", "message": "locations missing. ", "response": { "locations": "MISSING_PARAMETER" } }
https://mza.bundledseo.com/developers/docs/local/api#_api_batch__id-PATCH
API | yuca.pro1
-
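A note on the question above, purely a guess from the MISSING_PARAMETER response rather than the official docs: the endpoint may expect the JSON array under a parameter named `locations` instead of as a raw binary body. A hedged Python sketch of building such a payload (the location field names are invented for illustration; the PATCH call itself is left as a comment because the endpoint and auth are account-specific):

```python
import json

# Invented example records; real field names come from the Moz Local docs.
locations = [
    {"id": 101, "name": "Store A", "phone": "555-0100"},
    {"id": 102, "name": "Store B", "phone": "555-0101"},
]

# Serialize the array under a `locations` key and send it as form data,
# e.g. with requests: requests.patch(url, auth=auth, data=payload)
payload = {"locations": json.dumps(locations)}
```
-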
API url_metrics Error deserializing POST body
Getting an error from the https://lsapi.seomoz.com/v2/url_metrics API. I'm using Basic Auth for authentication. { "name": "Bad Request", "message": "Error deserializing POST body" }
API | verdet32321
-
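In my experience, that deserialization error means the POST body isn't the JSON shape the endpoint expects; as I understand it, v2 `url_metrics` wants a JSON object with a `targets` array (verify against the current Links API reference). A sketch of building such a body:

```python
import json

# The endpoint reportedly expects an object like {"targets": [...]};
# a bare array or form-encoded data can trigger "Error deserializing POST body".
body = json.dumps({"targets": ["moz.com", "example.com"]})

# Send with Basic Auth, e.g.:
# requests.post("https://lsapi.seomoz.com/v2/url_metrics",
#               auth=(ACCESS_ID, SECRET_KEY),
#               headers={"Content-Type": "application/json"},
#               data=body)
```
-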
Crawl error HELP!
Hi. I have two campaigns that constantly show crawl errors, but in Search Console I don't see any. What can I do to fix this? The URLs are https://www.starmeup.com/blog/en/ and https://www.starmeup.com/blog/es/ Thanks
API | lala20172
-
Do you fetch website titles from the paid API https://moz.com/help/guides/moz-api/mozscape/api-reference/url-metrics?
We are using one of your APIs, https://mza.bundledseo.com/help/guides/moz-api/mozscape/api-reference/url-metrics, on our website, and it does not show the title for each website. But when I view the title of the same website through your MozBar extension, it does show the title. Can you tell me what is missing here?
API | SOSCreatives
-
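One possible cause for the missing titles (an assumption; the real flag values live in the URL Metrics reference linked above): the Mozscape endpoint only returns fields whose bit flags are summed into the `Cols` request parameter, so a title only comes back if its flag is included. Illustratively, with hypothetical flag constants:

```python
# Hypothetical bit-flag constants for illustration; look up the real
# values in the Mozscape URL Metrics reference.
COL_TITLE = 1
COL_CANONICAL_URL = 4
COL_DOMAIN_AUTHORITY = 68719476736

# OR together a flag for every field the response should include.
cols = COL_TITLE | COL_CANONICAL_URL | COL_DOMAIN_AUTHORITY

# Request e.g. ?Cols=<cols>; if COL_TITLE is left out of the sum,
# the response simply omits the title field.
```
-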
803 Crawl attempt error
Hello, I'd be very grateful for any advice with this: my insights show I have an 803 error. Under "pages with crawl attempt error," the page in question is just an image uploaded to WordPress. However, above the graph it says: "We were unable to access your homepage, which prevented us from crawling the rest of your site. It is likely that other browsers as well as search engines may encounter this problem and abort their sessions." Does this really mean my homepage, or is the only issue with the image? I have noticed that for the past 8 weeks I'm getting 1 crawl attempt error every 2 weeks (so when viewed weekly I have 1 error one week, 0 errors the next week, etc.). Is this normal? Since receiving this 803 error, I have dropped significantly in the SERPs for 3 key terms I was on page 1 for (now down to pages 3-4). Could this be related? I realise this is a bit specific, but thanks in advance. Cheers 🙂
API | wearehappymedia0
-
On-Page Reports showing old URLs
While taking a look at our site's on-page reports, I noticed some of our keywords with very old URLs that haven't existed for close to a year. How do I make sure Moz's keyword ranking is finding the correct page, and that I'm not getting graded on keywords/URLs that don't exist any more or have been 301'd to new URLs? Is there a way to clean these out? My on-page reports say I have 62 reports for only a total of 34 keywords in rankings. As you can see from the image, most of the URLs for "tax folder" have now been 301'd to not include /product or /category, but Moz is still showing them with the old URL structure. BTW, our site is minespress.com
API | smines
-
Huge increase in on-page errors
Hi guys, I've just checked my online campaign, and I see errors in my crawl diagnostics have almost doubled from the 21st of October to the 25th of October, going from 6,708 errors to 11,599. Can anyone tell me what may have caused this? Also, I notice we have a lot of issues with duplicate page titles, which seems strange as no new pages have been added; can anyone explain why this might be? I look forward to hearing from you.
API | Hardley1110