Working prototype of the serious game for teaching knowledge about software engineering work models.


Metadata-Version: 2.1
Name: charset-normalizer
Version: 3.2.0
Summary: The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet.
Home-page: https://github.com/Ousret/charset_normalizer
Author: Ahmed TAHRI
Author-email: ahmed.tahri@cloudnursery.dev
License: MIT
Project-URL: Bug Reports, https://github.com/Ousret/charset_normalizer/issues
Project-URL: Documentation, https://charset-normalizer.readthedocs.io/en/latest
Keywords: encoding,charset,charset-detector,detector,normalization,unicode,chardet,detect
Classifier: Development Status :: 5 - Production/Stable
Classifier: License :: OSI Approved :: MIT License
Classifier: Intended Audience :: Developers
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: Topic :: Text Processing :: Linguistic
Classifier: Topic :: Utilities
Classifier: Typing :: Typed
Requires-Python: >=3.7.0
Description-Content-Type: text/markdown
License-File: LICENSE
Provides-Extra: unicode_backport

<h1 align="center">Charset Detection, for Everyone πŸ‘‹</h1>

<p align="center">
  <sup>The Real First Universal Charset Detector</sup><br>
  <a href="https://pypi.org/project/charset-normalizer">
    <img src="https://img.shields.io/pypi/pyversions/charset_normalizer.svg?orange=blue" />
  </a>
  <a href="https://pepy.tech/project/charset-normalizer/">
    <img alt="Download Count Total" src="https://pepy.tech/badge/charset-normalizer/month" />
  </a>
  <a href="https://bestpractices.coreinfrastructure.org/projects/7297">
    <img src="https://bestpractices.coreinfrastructure.org/projects/7297/badge">
  </a>
</p>

> A library that helps you read text from an unknown charset encoding.<br /> Motivated by `chardet`,
> I'm trying to resolve the issue by taking a new approach.
> All IANA character set names for which the Python core library provides codecs are supported.

<p align="center">
  >>>>> <a href="https://charsetnormalizerweb.ousret.now.sh" target="_blank">πŸ‘‰ Try Me Online Now, Then Adopt Me πŸ‘ˆ </a> <<<<<
</p>
This project offers you an alternative to **Universal Charset Encoding Detector**, also known as **Chardet**.

| Feature | [Chardet](https://github.com/chardet/chardet) | Charset Normalizer | [cChardet](https://github.com/PyYoshi/cChardet) |
|--------------------------------------------------|:---------------------------------------------:|:------------------:|:-----------------------------------------------:|
| `Fast` | ❌ | βœ… | βœ… |
| `Universal**` | ❌ | βœ… | ❌ |
| `Reliable` **without** distinguishable standards | ❌ | βœ… | βœ… |
| `Reliable` **with** distinguishable standards | βœ… | βœ… | βœ… |
| `License` | LGPL-2.1<br>_restrictive_ | MIT | MPL-1.1<br>_restrictive_ |
| `Native Python` | βœ… | βœ… | ❌ |
| `Detect spoken language` | ❌ | βœ… | N/A |
| `UnicodeDecodeError Safety` | ❌ | βœ… | ❌ |
| `Whl Size` | 193.6 kB | 40 kB | ~200 kB |
| `Supported Encoding` | 33 | πŸŽ‰ [90](https://charset-normalizer.readthedocs.io/en/latest/user/support.html#supported-encodings) | 40 |

<p align="center">
<img src="https://i.imgflip.com/373iay.gif" alt="Reading Normalized Text" width="226"/><img src="https://media.tenor.com/images/c0180f70732a18b4965448d33adba3d0/tenor.gif" alt="Cat Reading Text" width="200"/>
</p>

*\*\* : They are clearly using specific code for a specific encoding even if it covers most of the encodings in use*<br>

Did you get there because of the logs? See [https://charset-normalizer.readthedocs.io/en/latest/user/miscellaneous.html](https://charset-normalizer.readthedocs.io/en/latest/user/miscellaneous.html)

## ⚑ Performance

This package offers better performance than its counterpart Chardet. Here are some numbers.

| Package | Accuracy | Mean per file (ms) | File per sec (est) |
|-----------------------------------------------|:--------:|:------------------:|:------------------:|
| [chardet](https://github.com/chardet/chardet) | 86 % | 200 ms | 5 file/sec |
| charset-normalizer | **98 %** | **10 ms** | 100 file/sec |

| Package | 99th percentile | 95th percentile | 50th percentile |
|-----------------------------------------------|:---------------:|:---------------:|:---------------:|
| [chardet](https://github.com/chardet/chardet) | 1200 ms | 287 ms | 23 ms |
| charset-normalizer | 100 ms | 50 ms | 5 ms |

Chardet's performance on larger files (1MB+) is very poor. Expect a huge difference on large payloads.

> Stats are generated using 400+ files using default parameters. For more details on the files used, see the GHA workflows.
> And yes, these results might change at any time. The dataset can be updated to include more files.
> The actual delays heavily depend on your CPU capabilities. The factors should remain the same.
> Keep in mind that the stats are generous and that Chardet's accuracy vs ours is measured using Chardet's initial capability
> (e.g. supported encodings). Challenge them if you want.

## ✨ Installation

Using pip:

```sh
pip install charset-normalizer -U
```

## πŸš€ Basic Usage

### CLI

This package comes with a CLI.

```
usage: normalizer [-h] [-v] [-a] [-n] [-m] [-r] [-f] [-t THRESHOLD]
                  file [file ...]

The Real First Universal Charset Detector. Discover originating encoding used
on text file. Normalize text to unicode.

positional arguments:
  files                 File(s) to be analysed

optional arguments:
  -h, --help            show this help message and exit
  -v, --verbose         Display complementary information about file if any.
                        Stdout will contain logs about the detection process.
  -a, --with-alternative
                        Output complementary possibilities if any. Top-level
                        JSON WILL be a list.
  -n, --normalize       Permit to normalize input file. If not set, program
                        does not write anything.
  -m, --minimal         Only output the charset detected to STDOUT. Disabling
                        JSON output.
  -r, --replace         Replace file when trying to normalize it instead of
                        creating a new one.
  -f, --force           Replace file without asking if you are sure, use this
                        flag with caution.
  -t THRESHOLD, --threshold THRESHOLD
                        Define a custom maximum amount of chaos allowed in
                        decoded content. 0. <= chaos <= 1.
  --version             Show version information and exit.
```

```bash
normalizer ./data/sample.1.fr.srt
```

πŸŽ‰ Since version 1.4.0 the CLI produces an easily usable stdout result in JSON format.

```json
{
    "path": "/home/default/projects/charset_normalizer/data/sample.1.fr.srt",
    "encoding": "cp1252",
    "encoding_aliases": [
        "1252",
        "windows_1252"
    ],
    "alternative_encodings": [
        "cp1254",
        "cp1256",
        "cp1258",
        "iso8859_14",
        "iso8859_15",
        "iso8859_16",
        "iso8859_3",
        "iso8859_9",
        "latin_1",
        "mbcs"
    ],
    "language": "French",
    "alphabets": [
        "Basic Latin",
        "Latin-1 Supplement"
    ],
    "has_sig_or_bom": false,
    "chaos": 0.149,
    "coherence": 97.152,
    "unicode_path": null,
    "is_preferred": true
}
```
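
Because the report is plain JSON on stdout, it is easy to consume from a script. A minimal sketch, assuming the CLI is on your PATH and using the sample file from above; the use of `subprocess` and `json` here is illustrative, not part of the package:

```python
import json
import subprocess

# Illustrative only: run the CLI on one file and parse the JSON report it prints.
raw = subprocess.run(
    ["normalizer", "./data/sample.1.fr.srt"],
    capture_output=True,
    text=True,
    check=True,
).stdout

report = json.loads(raw)
# Keys taken from the sample report shown above.
print(report["encoding"], report["language"], report["chaos"])
```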

### Python

*Just print out normalized text*

```python
from charset_normalizer import from_path

results = from_path('./my_subtitle.srt')

print(str(results.best()))
```
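
If you need more than the decoded text, the best match can be inspected further. A hedged sketch, assuming the match exposes `encoding` and `language` attributes the way the CLI JSON report above suggests:

```python
from charset_normalizer import from_path

results = from_path('./my_subtitle.srt')
best_guess = results.best()  # may be None if nothing fits

if best_guess is None:
    print("no suitable encoding found")
else:
    # Attribute names mirror the CLI JSON report above (an assumption, not a guarantee).
    print(best_guess.encoding)
    print(best_guess.language)
    print(str(best_guess))  # the decoded, normalized text
```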

*Upgrade your code without effort*

```python
from charset_normalizer import detect
```

The above code will behave the same as **chardet**. We ensure that we offer the best (reasonable) BC result possible.
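
A short, hedged sketch of what the drop-in looks like in practice; the sample bytes and the exact keys of the returned dict are illustrative assumptions:

```python
from charset_normalizer import detect

# Illustrative only: feed raw bytes, get a chardet-like dict back.
payload = "DΓ©jΓ  vu : l'encodage importe peu, le rΓ©sultat compte.".encode("cp1252")

result = detect(payload)
print(result.get("encoding"), result.get("confidence"))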

See the docs for advanced usage: [readthedocs.io](https://charset-normalizer.readthedocs.io/en/latest/)

## πŸ˜‡ Why

When I started using Chardet, I noticed that it was not suited to my expectations, and I wanted to propose a
reliable alternative using a completely different method. Also! I never back down from a good challenge!

I **don't care** about the **originating charset** encoding, because **two different tables** can
produce **two identical rendered strings.**
What I want is to get readable text, the best I can.

In a way, **I'm brute forcing text decoding.** How cool is that? 😎

Don't confuse the package **ftfy** with charset-normalizer or chardet. ftfy's goal is to repair Unicode strings, whereas charset-normalizer converts a raw file in an unknown encoding to Unicode.

## 🍰 How

- Discard all charset encoding tables that could not fit the binary content.
- Measure the noise, or mess, once opened (by chunks) with a corresponding charset encoding.
- Extract the matches with the lowest mess detected.
- Additionally, we measure coherence / probe for a language (see the sketch after this section).

**Wait a minute**, what is noise/mess and coherence according to **YOU?**

*Noise:* I opened hundreds of text files, **written by humans**, with the wrong encoding table. **I observed**, then
**I established** some ground rules about **what is obvious** when **it seems like** a mess.
I know that my interpretation of what is noise is probably incomplete; feel free to contribute in order to
improve or rewrite it.

*Coherence:* For each language there is on earth, we have computed ranked letter-appearance occurrences (the best we can). So I figured
that intel is worth something here. I use those records against decoded text to check if I can detect intelligent design.
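
Here is the promised sketch: a deliberately naive Python rendition of the steps above. The candidate list and the toy "mess" metric are assumptions made purely for illustration; the real mess and coherence measurements in charset-normalizer are far more elaborate.

```python
# Naive illustration of: try candidates, score the mess, keep the cleanest match.
CANDIDATES = ["utf_8", "cp1252", "latin_1", "utf_16"]

def naive_best_guess(payload: bytes):
    scored = []
    for name in CANDIDATES:
        try:
            decoded = payload.decode(name)  # step 1: discard tables that cannot fit
        except (UnicodeDecodeError, LookupError):
            continue
        # step 2: toy "mess" metric, the share of characters that do not look like printable text
        mess = sum(not ch.isprintable() and ch not in "\r\n\t" for ch in decoded) / max(len(decoded), 1)
        scored.append((mess, name, decoded))
    if not scored:
        return None
    # step 3: keep the candidate with the lowest mess (the real library also probes coherence)
    mess, name, decoded = min(scored)
    return name, decoded

print(naive_best_guess("NoΓ«l Γ  Paris, dΓ¨s 1999".encode("cp1252")))
```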

## ⚑ Known limitations

- Language detection is unreliable when the text contains two or more languages sharing identical letters (e.g. HTML (English tags) + Turkish content (sharing Latin characters)).
- Every charset detector heavily depends on sufficient content. In common cases, do not bother running detection on very tiny content.

## ⚠️ About Python EOLs

**If you are running:**

- Python >=2.7,<3.5: Unsupported
- Python 3.5: charset-normalizer < 2.1
- Python 3.6: charset-normalizer < 3.1

Upgrade your Python interpreter as soon as possible.
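
If you need to know at runtime which release line still fits your interpreter, the table above can be expressed as a tiny helper. This is a hedged sketch only; the function name and returned pins are illustrative and not part of the package:

```python
import sys

def charset_normalizer_pin():
    # Mirrors the EOL table above; illustrative helper, not an official API.
    if sys.version_info < (3, 5):
        return None                      # unsupported interpreters
    if sys.version_info < (3, 6):
        return "charset-normalizer<2.1"  # Python 3.5
    if sys.version_info < (3, 7):
        return "charset-normalizer<3.1"  # Python 3.6
    return "charset-normalizer"          # current releases (Python >= 3.7)

print(charset_normalizer_pin())
```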

## πŸ‘€ Contributing

Contributions, issues and feature requests are very much welcome.<br />
Feel free to check the [issues page](https://github.com/ousret/charset_normalizer/issues) if you want to contribute.

## πŸ“ License

Copyright Β© [Ahmed TAHRI @Ousret](https://github.com/Ousret).<br />
This project is [MIT](https://github.com/Ousret/charset_normalizer/blob/master/LICENSE) licensed.

Character frequencies used in this project Β© 2012 [Denny VrandečiΔ‡](http://simia.net/letters/)

## πŸ’Ό For Enterprise

Professional support for charset-normalizer is available as part of the [Tidelift
Subscription][1]. Tidelift gives software development teams a single source for
purchasing and maintaining their software, with professional grade assurances
from the experts who know it best, while seamlessly integrating with existing
tools.

[1]: https://tidelift.com/subscription/pkg/pypi-charset-normalizer?utm_source=pypi-charset-normalizer&utm_medium=readme

# Changelog

All notable changes to charset-normalizer will be documented in this file. This project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).

## [3.2.0](https://github.com/Ousret/charset_normalizer/compare/3.1.0...3.2.0) (2023-06-07)

### Changed
- Typehint for function `from_path` no longer enforces `PathLike` as its first argument
- Minor improvement over the global detection reliability

### Added
- Introduce function `is_binary` that relies on main capabilities, and is optimized to detect binaries
- Propagate `enable_fallback` argument throughout `from_bytes`, `from_path`, and `from_fp` to allow deeper control over the detection (default True)
- Explicit support for Python 3.12

### Fixed
- Edge case detection failure where a file would contain a 'very-long' camel-cased word (Issue #289)

## [3.1.0](https://github.com/Ousret/charset_normalizer/compare/3.0.1...3.1.0) (2023-03-06)

### Added
- Argument `should_rename_legacy` for legacy function `detect`; any other new arguments are disregarded without errors (PR #262)

### Removed
- Support for Python 3.6 (PR #260)

### Changed
- Optional speedup provided by mypy/c 1.0.1

## [3.0.1](https://github.com/Ousret/charset_normalizer/compare/3.0.0...3.0.1) (2022-11-18)

### Fixed
- Multi-byte cutter/chunk generator did not always cut correctly (PR #233)

### Changed
- Speedup provided by mypy/c 0.990 on Python >= 3.7

## [3.0.0](https://github.com/Ousret/charset_normalizer/compare/2.1.1...3.0.0) (2022-10-20)

### Added
- Extend the capability of explain=True when cp_isolation contains at most two entries (min one); it will log the Mess-detector results in detail
- Support for alternative language frequency set in charset_normalizer.assets.FREQUENCIES
- Add parameter `language_threshold` in `from_bytes`, `from_path` and `from_fp` to adjust the minimum expected coherence ratio
- `normalizer --version` now specifies if the current version provides extra speedup (meaning mypyc compilation whl)

### Changed
- Build with static metadata using 'build' frontend
- Make the language detection stricter
- Optional: Module `md.py` can be compiled using Mypyc to provide an extra speedup up to 4x faster than v2.1

### Fixed
- CLI with opt --normalize failed when using a full path for files
- TooManyAccentuatedPlugin induced false positives on the mess detection when too few alpha characters had been fed to it
- Sphinx warnings when generating the documentation

### Removed
- Coherence detector no longer returns 'Simple English'; it returns 'English' instead
- Coherence detector no longer returns 'Classical Chinese'; it returns 'Chinese' instead
- Breaking: Methods `first()` and `best()` from CharsetMatch
- UTF-7 will no longer appear as "detected" without a recognized SIG/mark (is unreliable/conflicts with ASCII)
- Breaking: Class aliases CharsetDetector, CharsetDoctor, CharsetNormalizerMatch and CharsetNormalizerMatches
- Breaking: Top-level function `normalize`
- Breaking: Properties `chaos_secondary_pass`, `coherence_non_latin` and `w_counter` from CharsetMatch
- Support for the backport `unicodedata2`

## [3.0.0rc1](https://github.com/Ousret/charset_normalizer/compare/3.0.0b2...3.0.0rc1) (2022-10-18)

### Added
- Extend the capability of explain=True when cp_isolation contains at most two entries (min one); it will log the Mess-detector results in detail
- Support for alternative language frequency set in charset_normalizer.assets.FREQUENCIES
- Add parameter `language_threshold` in `from_bytes`, `from_path` and `from_fp` to adjust the minimum expected coherence ratio

### Changed
- Build with static metadata using 'build' frontend
- Make the language detection stricter

### Fixed
- CLI with opt --normalize failed when using a full path for files
- TooManyAccentuatedPlugin induced false positives on the mess detection when too few alpha characters had been fed to it

### Removed
- Coherence detector no longer returns 'Simple English'; it returns 'English' instead
- Coherence detector no longer returns 'Classical Chinese'; it returns 'Chinese' instead

## [3.0.0b2](https://github.com/Ousret/charset_normalizer/compare/3.0.0b1...3.0.0b2) (2022-08-21)

### Added
- `normalizer --version` now specifies if the current version provides extra speedup (meaning mypyc compilation whl)

### Removed
- Breaking: Methods `first()` and `best()` from CharsetMatch
- UTF-7 will no longer appear as "detected" without a recognized SIG/mark (is unreliable/conflicts with ASCII)

### Fixed
- Sphinx warnings when generating the documentation

## [3.0.0b1](https://github.com/Ousret/charset_normalizer/compare/2.1.0...3.0.0b1) (2022-08-15)

### Changed
- Optional: Module `md.py` can be compiled using Mypyc to provide an extra speedup up to 4x faster than v2.1

### Removed
- Breaking: Class aliases CharsetDetector, CharsetDoctor, CharsetNormalizerMatch and CharsetNormalizerMatches
- Breaking: Top-level function `normalize`
- Breaking: Properties `chaos_secondary_pass`, `coherence_non_latin` and `w_counter` from CharsetMatch
- Support for the backport `unicodedata2`

## [2.1.1](https://github.com/Ousret/charset_normalizer/compare/2.1.0...2.1.1) (2022-08-19)

### Deprecated
- Function `normalize` scheduled for removal in 3.0

### Changed
- Removed useless call to decode in fn is_unprintable (#206)

### Fixed
- Third-party library (i18n xgettext) crashing not recognizing utf_8 (PEP 263) with underscore from [@aleksandernovikov](https://github.com/aleksandernovikov) (#204)

## [2.1.0](https://github.com/Ousret/charset_normalizer/compare/2.0.12...2.1.0) (2022-06-19)

### Added
- Output the Unicode table version when running the CLI with `--version` (PR #194)

### Changed
- Re-use decoded buffer for single byte character sets from [@nijel](https://github.com/nijel) (PR #175)
- Fixing some performance bottlenecks from [@deedy5](https://github.com/deedy5) (PR #183)

### Fixed
- Workaround potential bug in cpython with Zero Width No-Break Space located in Arabic Presentation Forms-B, Unicode 1.1 not acknowledged as space (PR #175)
- CLI default threshold aligned with the API threshold from [@oleksandr-kuzmenko](https://github.com/oleksandr-kuzmenko) (PR #181)

### Removed
- Support for Python 3.5 (PR #192)

### Deprecated
- Use of backport unicodedata from `unicodedata2` as Python is quickly catching up, scheduled for removal in 3.0 (PR #194)

## [2.0.12](https://github.com/Ousret/charset_normalizer/compare/2.0.11...2.0.12) (2022-02-12)

### Fixed
- ASCII mis-detection in rare cases (PR #170)

## [2.0.11](https://github.com/Ousret/charset_normalizer/compare/2.0.10...2.0.11) (2022-01-30)

### Added
- Explicit support for Python 3.11 (PR #164)

### Changed
- The logging behavior has been completely reviewed, now using only TRACE and DEBUG levels (PR #163 #165)

## [2.0.10](https://github.com/Ousret/charset_normalizer/compare/2.0.9...2.0.10) (2022-01-04)

### Fixed
- Fallback match entries might lead to UnicodeDecodeError for large byte sequences (PR #154)

### Changed
- Skipping the language-detection (CD) on ASCII (PR #155)

## [2.0.9](https://github.com/Ousret/charset_normalizer/compare/2.0.8...2.0.9) (2021-12-03)

### Changed
- Moderating the logging impact (since 2.0.8) for specific environments (PR #147)

### Fixed
- Wrong logging level applied when setting kwarg `explain` to True (PR #146)

## [2.0.8](https://github.com/Ousret/charset_normalizer/compare/2.0.7...2.0.8) (2021-11-24)

### Changed
- Improvement over Vietnamese detection (PR #126)
- MD improvement on trailing data and long foreign (non-pure latin) data (PR #124)
- Efficiency improvements in cd/alphabet_languages from [@adbar](https://github.com/adbar) (PR #122)
- Call sum() without an intermediary list following PEP 289 recommendations from [@adbar](https://github.com/adbar) (PR #129)
- Code style as refactored by Sourcery-AI (PR #131)
- Minor adjustment on the MD around european words (PR #133)
- Remove and replace SRTs from assets / tests (PR #139)
- Initialize the library logger with a `NullHandler` by default from [@nmaynes](https://github.com/nmaynes) (PR #135)
- Setting kwarg `explain` to True will add provisionally (bounded to function lifespan) a specific stream handler (PR #135)

### Fixed
- Fix large (misleading) sequence giving UnicodeDecodeError (PR #137)
- Avoid using too insignificant chunk (PR #137)

### Added
- Add and expose function `set_logging_handler` to configure a specific StreamHandler from [@nmaynes](https://github.com/nmaynes) (PR #135)
- Add `CHANGELOG.md` entries, format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) (PR #141)

## [2.0.7](https://github.com/Ousret/charset_normalizer/compare/2.0.6...2.0.7) (2021-10-11)

### Added
- Add support for Kazakh (Cyrillic) language detection (PR #109)

### Changed
- Further improve inferring the language from a given single-byte code page (PR #112)
- Vainly trying to leverage PEP263 when PEP3120 is not supported (PR #116)
- Refactoring for potential performance improvements in loops from [@adbar](https://github.com/adbar) (PR #113)
- Various detection improvements (MD+CD) (PR #117)

### Removed
- Remove redundant logging entry about detected language(s) (PR #115)

### Fixed
- Fix a minor inconsistency between Python 3.5 and other versions regarding language detection (PR #117 #102)

## [2.0.6](https://github.com/Ousret/charset_normalizer/compare/2.0.5...2.0.6) (2021-09-18)

### Fixed
- Unforeseen regression with the loss of the backward-compatibility with some older minor of Python 3.5.x (PR #100)
- Fix CLI crash when using --minimal output in certain cases (PR #103)

### Changed
- Minor improvement to the detection efficiency (less than 1%) (PR #106 #101)

## [2.0.5](https://github.com/Ousret/charset_normalizer/compare/2.0.4...2.0.5) (2021-09-14)

### Changed
- The project now complies with: flake8, mypy, isort and black to ensure a better overall quality (PR #81)
- The BC-support with v1.x was improved, the old staticmethods are restored (PR #82)
- The Unicode detection is slightly improved (PR #93)
- Add syntax sugar \_\_bool\_\_ for results CharsetMatches list-container (PR #91)

### Removed
- The project no longer raises a warning on tiny content given for detection; it is simply logged as a warning instead (PR #92)

### Fixed
- In some rare cases, the chunks extractor could cut in the middle of a multi-byte character and could mislead the mess detection (PR #95)
- Some rare 'space' characters could trip up the UnprintablePlugin/Mess detection (PR #96)
- The MANIFEST.in was not exhaustive (PR #78)

## [2.0.4](https://github.com/Ousret/charset_normalizer/compare/2.0.3...2.0.4) (2021-07-30)

### Fixed
- The CLI no longer raises an unexpected exception when no encoding has been found (PR #70)
- Fix accessing the 'alphabets' property when the payload contains surrogate characters (PR #68)
- The logger could mislead (explain=True) on detected languages and the impact of one MBCS match (PR #72)
- Submatch factoring could be wrong in rare edge cases (PR #72)
- Multiple files given to the CLI were ignored when publishing results to STDOUT. (After the first path) (PR #72)
- Fix line endings from CRLF to LF for certain project files (PR #67)

### Changed
- Adjust the MD to lower the sensitivity, thus improving the global detection reliability (PR #69 #76)
- Allow fallback on specified encoding if any (PR #71)

## [2.0.3](https://github.com/Ousret/charset_normalizer/compare/2.0.2...2.0.3) (2021-07-16)

### Changed
- Part of the detection mechanism has been improved to be less sensitive, resulting in more accurate detection results. Especially ASCII. (PR #63)
- According to the community wishes, the detection will fall back on ASCII or UTF-8 in a last-resort case. (PR #64)

## [2.0.2](https://github.com/Ousret/charset_normalizer/compare/2.0.1...2.0.2) (2021-07-15)

### Fixed
- Empty/too small JSON payload mis-detection fixed. Report from [@tseaver](https://github.com/tseaver) (PR #59)

### Changed
- Don't inject unicodedata2 into sys.modules from [@akx](https://github.com/akx) (PR #57)

## [2.0.1](https://github.com/Ousret/charset_normalizer/compare/2.0.0...2.0.1) (2021-07-13)

### Fixed
- Make it work where there isn't a filesystem available, dropping assets frequencies.json. Report from [@sethmlarson](https://github.com/sethmlarson). (PR #55)
- Using explain=False permanently disabled the verbose output in the current runtime (PR #47)
- One log entry (language target preemptive) was not shown in logs when using explain=True (PR #47)
- Fix undesired exception (ValueError) on getitem of instance CharsetMatches (PR #52)

### Changed
- Public function normalize default args values were not aligned with from_bytes (PR #53)

### Added
- You may now use charset aliases in cp_isolation and cp_exclusion arguments (PR #47)

## [2.0.0](https://github.com/Ousret/charset_normalizer/compare/1.4.1...2.0.0) (2021-07-02)

### Changed
- 4x to 5 times faster than the previous 1.4.0 release. At least 2x faster than Chardet.
- The focus has been put on UTF-8 detection, which should perform nearly instantaneously.
- The backward compatibility with Chardet has been greatly improved. The legacy detect function returns an identical charset name whenever possible.
- The detection mechanism has been slightly improved, now Turkish content is detected correctly (most of the time)
- The program has been rewritten to ease the readability and maintainability. (+ Using static typing)
- utf_7 detection has been reinstated.

### Removed
- This package no longer requires anything when used with Python 3.5 (Dropped cached_property)
- Removed support for these languages: Catalan, Esperanto, Kazakh, Basque, VolapΓΌk, Azeri, Galician, Nynorsk, Macedonian, and Serbocroatian.
- The exception hook on UnicodeDecodeError has been removed.

### Deprecated
- Methods coherence_non_latin, w_counter, chaos_secondary_pass of the class CharsetMatch are now deprecated and scheduled for removal in v3.0

### Fixed
- The CLI output used the relative path of the file(s). Should be absolute.

## [1.4.1](https://github.com/Ousret/charset_normalizer/compare/1.4.0...1.4.1) (2021-05-28)

### Fixed
- Logger configuration/usage no longer conflict with others (PR #44)

## [1.4.0](https://github.com/Ousret/charset_normalizer/compare/1.3.9...1.4.0) (2021-05-21)

### Removed
- Using standard logging instead of using the package loguru.
- Dropping nose test framework in favor of the maintained pytest.
- Choose to not use dragonmapper package to help with gibberish Chinese/CJK text.
- Require cached_property only for Python 3.5 due to constraint. Dropping for every other interpreter version.
- Stop support for UTF-7 that does not contain a SIG.
- Dropping PrettyTable, replaced with pure JSON output in CLI.

### Fixed
- BOM marker in a CharsetNormalizerMatch instance could be False in rare cases even if obviously present. Due to the sub-match factoring process.
- Not searching properly for the BOM when trying utf32/16 parent codec.

### Changed
- Improving the package final size by compressing frequencies.json.
- Huge improvement over the largest payloads.

### Added
- CLI now produces JSON consumable output.
- Return ASCII if given sequences fit. Given reasonable confidence.

## [1.3.9](https://github.com/Ousret/charset_normalizer/compare/1.3.8...1.3.9) (2021-05-13)

### Fixed
- In some very rare cases, you may end up getting encode/decode errors due to a bad bytes payload (PR #40)

## [1.3.8](https://github.com/Ousret/charset_normalizer/compare/1.3.7...1.3.8) (2021-05-12)

### Fixed
- Empty given payload for detection may cause an exception if trying to access the `alphabets` property. (PR #39)

## [1.3.7](https://github.com/Ousret/charset_normalizer/compare/1.3.6...1.3.7) (2021-05-12)

### Fixed
- The legacy detect function should return UTF-8-SIG if sig is present in the payload. (PR #38)

## [1.3.6](https://github.com/Ousret/charset_normalizer/compare/1.3.5...1.3.6) (2021-02-09)

### Changed
- Amend the previous release to allow prettytable 2.0 (PR #35)

## [1.3.5](https://github.com/Ousret/charset_normalizer/compare/1.3.4...1.3.5) (2021-02-08)

### Fixed
- Fix error while using the package with a python pre-release interpreter (PR #33)

### Changed
- Dependencies refactoring, constraints revised.

### Added
- Add Python 3.9 and 3.10 to the supported interpreters

MIT License

Copyright (c) 2019 TAHRI Ahmed R.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.