This page describes methods to import XML dumps. XML dumps contain the content of a wiki (wiki pages with all their revisions), without the site-related data. An XML dump does not create a full backup of the wiki database; the dump does not contain user accounts, images, edit logs, etc.
The Special:Export page of any MediaWiki site, including any Wikimedia site and Wikipedia, creates an XML file (content dump). See meta:Data dumps and Manual:DumpBackup.php. XML files are explained in more detail on meta:Help:Export.
What to import?
How to import?
There are several methods for importing these XML dumps.
Using Special:Import
Special:Import can be used by wiki users with import permission (by default this is users in the sysop group) to import a small number of pages (about 100 should be safe). Trying to import large dumps this way may result in timeouts or connection failures. See meta:Help:Import for a detailed description.[1]
You are asked to give an interwiki prefix. For instance, if you exported from the English Wikipedia, you have to type 'en'.
Changing permissions
See Manual:User_rights
To allow all registered editors to import (not recommended), you would add a line to 'LocalSettings.php' granting the permission.
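A sketch of such a configuration, assuming the default 'user' group and the standard 'import' and 'importupload' permission names:

    # LocalSettings.php -- sketch only; adjust the group to your needs
    $wgGroupPermissions['user']['import'] = true;        # allow Special:Import (transwiki)
    $wgGroupPermissions['user']['importupload'] = true;  # allow uploading XML files on Special:Import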
Possible problems
To use transwiki import, PHP safe_mode must be off and 'open_basedir' must be empty (both of them are settings in php.ini); otherwise the import fails.
If you get an error like this, and Special:Import shows 'Import failed: Expected <mediawiki> tag, got ', the problem may be caused by a fatal error on a previous import, which leaves libxml in a wrong state across the entire server, or by another PHP script on the same server having disabled the entity loader (a PHP bug). This happens on MediaWiki versions prior to 1.26, and the solution is to restart the web server service (Apache, etc.), or to write and execute a script that calls libxml_disable_entity_loader(false); (see task T86036).
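On those older versions, the script can be as small as this sketch (run it in the same PHP environment that serves the wiki):

    <?php
    // Sketch: re-enable libxml's entity loader, which a previous fatal error
    // or another PHP script may have left disabled (MediaWiki < 1.26).
    libxml_disable_entity_loader( false );
    echo "libxml entity loader re-enabled\n";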
Using importDump.php, if you have shell access
- Recommended method for general use, but slow for very big data sets. For very large amounts of data, such as a dump of a big Wikipedia, use mwdumper, and import the link tables as separate SQL dumps.
importDump.php is a command line script located in the maintenance folder of your MediaWiki installation. If you have shell access, you can call importDump.php from within the maintenance folder, either passing the dump file as an argument or piping it on standard input (add paths as necessary). Note that some MediaWiki versions use --user-prefix instead of --username-prefix when importing files.
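A sketch of both invocation styles (the file names are placeholders, and the empty --username-prefix value is only an illustration):

    cd /path/to/mediawiki/maintenance
    # Pass the dump file as an argument:
    php importDump.php --username-prefix="" dumpfile.xml.gz
    # ...or pipe it on standard input:
    php importDump.php < dumpfile.xml
    # Add --no-updates for a faster import (see the notes below).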
Here, dumpfile.xml is the name of the XML dump file. If the file is compressed and has a .gz or .bz2 file extension (but not .tar.gz or .tar.bz2), it is decompressed automatically.
Afterwards, use importImages.php to import the images:
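A sketch, assuming the image files from the old wiki have been copied into a local directory (the path is a placeholder):

    cd /path/to/mediawiki/maintenance
    # Import every file found in the given directory:
    php importImages.php /path/to/exported/images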
Note: If you have other digital media file types uploaded to your wiki, e.g. .zip, .nxc, .cpp, .py, or .pdf, then you must also back up/export the wiki_prefix_imagelinks table and insert it into the corresponding table of your new MediaWiki database. Otherwise, all links referencing these file types will show up as broken in your wiki pages.
Note: If you are using a WAMP installation, you can have problems with the import due to InnoDB settings (by default this engine is disabled in my.ini); if you want to avoid problems, use the MyISAM engine.
Note: running importDump.php can take quite a long time. For a large Wikipedia dump with millions of pages, it may take days, even on a fast server. Add --no-updates for faster import. Also note that the information in meta:Help:Import about merging histories, etc. also applies.
Note: Optimizing the database after import is recommended: it can reduce the database size by a factor of two or three.
After running importDump.php, you may want to run rebuildrecentchanges.php in order to update the content of your Special:Recentchanges page.
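For example (a sketch, run from the wiki's root directory):

    php maintenance/rebuildrecentchanges.php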
FAQ
- How to set up debug mode? Use the command line option --debug.
- How to make a dry run (no data added to the database)? Use the command line option --dry-run (see the example after this list).
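Both options can be combined with a normal import command, as in this sketch (the file name is a placeholder):

    php maintenance/importDump.php --dry-run --debug dumpfile.xml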
Error messages
- Before running importImages.php, you first need to change into the maintenance folder, which contains the importImages.php maintenance script.
- Error while running MAMP: the solution is to use specific database credentials.
Using importTextFiles.php Maintenance Script
MediaWiki version: ≥ 1.27
If you have a lot of content converted from another source (several word processor files, content from another wiki, etc.), you may have several files that you would like to import into your wiki. In MediaWiki 1.27 and later, you can use the importTextFiles.php maintenance script.
You can also use the edit.php maintenance script for this purpose.
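A sketch of a possible invocation, assuming one page per text file named after its title; the user name and summary are placeholders, and you should check the script's --help output for the options supported by your version:

    # Each text file becomes (or overwrites) a wiki page named after the file.
    php maintenance/importTextFiles.php --user Admin --summary "Initial import" --overwrite pages/*.txt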
Using mwdumper
Apparently, it cannot be used to import into MediaWiki 1.31 or later.
mwdumper is a Java application that can be used to read, write and convert MediaWiki XML dumps. It can generate an SQL dump from the XML file (for later use with mysql or phpMyAdmin) as well as import into the database directly. It is a lot faster than importDump.php; however, it only imports the revisions (page contents) and does not update the internal link tables accordingly, which means that category pages and many special pages will show incomplete or incorrect information unless you update those tables.
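A sketch of a typical pipeline that feeds the generated SQL straight into MySQL (the user, database and dump file names are placeholders, and the exact --format value depends on your mwdumper and MediaWiki versions):

    java -jar mwdumper.jar --format=sql:1.25 pages_full.xml.bz2 | mysql -u wikiuser -p wikidb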
If available, you can fill the link tables by importing separate SQL dumps of these tables using the mysql command line client directly. For Wikimedia wikis, this data is available along with the XML dumps. Otherwise, you can run rebuildall.php, which will take a long time because it has to parse all pages; this is not recommended for large data sets.
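For example, for a Wikimedia dump you might load an accompanying link-table dump directly, or fall back to rebuildall.php; in this sketch the file, user and database names are placeholders:

    # Load a link table that was dumped as SQL (here: pagelinks):
    gunzip -c enwiki-latest-pagelinks.sql.gz | mysql -u wikiuser -p wikidb
    # Or rebuild the link tables from the imported text (slow):
    php maintenance/rebuildall.php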
Using pywikibot, pagefromfile.py and Nokogiri
pywikibot is a collection of tools written in Python that automate work on Wikipedia and other MediaWiki sites. Once installed on your computer, you can use the tool pagefromfile.py, which lets you upload a wiki file to Wikipedia or other MediaWiki sites. The XML file created by dumpBackup.php can be transformed into a wiki file suitable for processing by pagefromfile.py using a simple Ruby program similar to the sketch below (the program transforms all XML files in the current directory, which is needed if your MediaWiki site is a family):
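A minimal sketch of such a dumpxml2wiki.rb program (not the original script; the output file name wiki.txt is an assumption):

    # dumpxml2wiki.rb -- a minimal sketch, not the original program.
    # Converts every dumpBackup.php XML file in the current directory into
    # one wiki text file ('wiki.txt' is an assumed name) for pagefromfile.py.
    require 'nokogiri'

    File.open('wiki.txt', 'w') do |out|
      Dir.glob('*.xml') do |filename|
        doc = Nokogiri::XML(File.read(filename))
        # The dump's elements live in the MediaWiki export namespace,
        # so every XPath step is prefixed with 'xmlns:'.
        doc.xpath('//xmlns:page').each do |page|
          title = page.xpath('xmlns:title').text
          text  = page.xpath('xmlns:revision/xmlns:text').text
          # pagefromfile.py expects each page wrapped in {{-start-}} ... {{-stop-}},
          # with the page name in an HTML comment between triple quotes
          # on the same first line.
          out.puts "{{-start-}}<!--'''#{title}'''-->"
          out.puts text
          out.puts '{{-stop-}}'
        end
      end
    end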
For example, here is an excerpt of a wiki file output by the command 'ruby dumpxml2wiki.rb' (two pages can then be uploaded by pagefromfile.py, a Template and a second page which is a redirect):
The program reads each XML file, extracts the text within the <text></text> markup of each page, finds the corresponding title in the parent element, and encloses it with the paired {{-start-}}<!--'''Title of the page'''--> {{-stop-}} commands used by pagefromfile to create or update a page. The name of the page is placed in an HTML comment and surrounded by three quotes on the same first line. Note that the name of the page can be written in Unicode. Sometimes it is important that the page starts directly with a command, such as #REDIRECT; in that case the comment giving the name of the page must come after the command but still on the first line.
Note that the XML dump files produced by dumpBackup.php place their elements in a namespace, so in order to access the text nodes with Nokogiri you need to prefix each step of your XPath with 'xmlns:'. Nokogiri is an HTML, XML, SAX and Reader parser for Ruby with the ability to search documents via XPath or CSS3 selectors.
Example of the use of 'pagefromfile' to upload the output wiki text file:
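A sketch of such an invocation (the file name matches the Ruby sketch above; the summary text is only an illustration):

    python pwb.py pagefromfile -file:wiki.txt -summary:"Imported from XML dump"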
How to import logs?
Exporting and importing logs with the standard MediaWiki scripts often proves very hard; an alternative for import is the pages_logging.py script in the WikiDAT tool, as suggested by Felipe Ortega.
Troubleshooting
Merging histories, revision conflict, edit summaries, and other complications
Interwikis
If you get the message, the problem is that some pages to be imported have a prefix that is also used for interwiki linking. For example, pages with a prefix of 'Meta:' would conflict with the interwiki prefix meta:, which by default links to https://meta.wikimedia.org.
You can do any of the following.
- Remove the prefix from the interwiki table. This will preserve page titles, but prevent interwiki linking through that prefix.
- Example: you will preserve page titles 'Meta:Blah blah' but will not be able to use the prefix 'meta:' to link to meta.wikimedia.org (although it will be possible through a different prefix).
- How to do it: before importing the dump, run the query DELETE FROM interwiki WHERE iw_prefix='prefix' (note: do not include the colon in the prefix). Alternatively, if you have enabled editing the interwiki table, you can simply go to Special:Interwiki and click the 'Delete' link on the right side of the row belonging to that prefix.
- Replace the unwanted prefix in the XML file with 'Project:' before importing. This will preserve the functionality of the prefix as an interwiki link, but will replace the prefix in the page titles with the name of the wiki they are imported into, and might be quite a pain to do on large dumps.
- Example: replace all 'Meta:' with 'Project:' in the XML file. MediaWiki will then replace 'Project:' with the name of your wiki during importing.
See also
- Manual:Configuring_file_uploads#Set_maximum_size_for_file_uploads – May come in handy if you are doing massive imports
- Manual:Errors_and_Symptoms#Fatal_error:_Allowed_memory_size_of_nnnnnnn_bytes_exhausted_.28tried_to_allocate_nnnnnnnn_bytes.29 – Settings that may need to be changed if you are doing massive imports
- Manual:ImportImages.php - for importing images.
References
- ↑ See Manual:XML Import file manipulation in CSharp for a C# code sample that manipulates an XML import file.